modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
KaeriJenti/Kaori-34b-v2 | KaeriJenti | "2023-12-21T08:02:49Z" | 1,348 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-21T05:59:44Z" | ---
license: llama2
---
<h1>Kaori-34b-v2 Model Card</h1>
This model was fine-tuned by Kaeri and Jenti.
<h3>Datasets</h3>
- Open-Platypus
- Dolphin
- OpenOrca
We trained the model on <b>100%</b> of the Open-Platypus data, <b>5%</b> of the Dolphin data, and <b>10%</b> of the OpenOrca data, and applied an SFT strategy.
We did not use GSM8k samples when generating data.
We were also careful about data contamination: we similarity-filtered the training data,
removing samples that corresponded to any of the tasks in the following list.
<pre>
filtering_tasks = [
'cot_gsm8k',
'cot_gsm8k_ii',
'drop:2.0.0',
'winogrande:1.1.0',
'task228_arc_answer_generation_easy',
'ai2_arc/ARC-Challenge:1.0.0',
'ai2_arc/ARC-Easy:1.0.0',
'task229_arc_answer_generation_hard',
'hellaswag:1.1.0',
'task1389_hellaswag_completion'
]
</pre>
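The card does not include the actual filtering code; the following is a minimal illustrative sketch of similarity-based decontamination, assuming a simple character-level similarity threshold and hypothetical `training_data` / `benchmark_examples` placeholders.
```python
# Illustrative sketch only -- the exact filtering method is not specified in this card.
# A training sample is dropped if it is too similar to any example from the excluded tasks.
from difflib import SequenceMatcher

def is_contaminated(sample_text: str, benchmark_texts: list[str], threshold: float = 0.8) -> bool:
    """Return True if sample_text closely matches any benchmark example."""
    return any(
        SequenceMatcher(None, sample_text, ref).ratio() >= threshold
        for ref in benchmark_texts
    )

# Hypothetical placeholders standing in for the real training and benchmark data.
benchmark_examples = ["Natalia sold clips to 48 of her friends in April, how many did she sell in total?"]
training_data = [
    {"text": "Natalia sold clips to 48 of her friends in April, how many did she sell in total?"},
    {"text": "Explain the difference between lists and tuples in Python."},
]

clean_data = [s for s in training_data if not is_contaminated(s["text"], benchmark_examples)]
print(len(clean_data))  # 1 -- the duplicate of the benchmark example was dropped
```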
<h3>Framework:</h3>
- https://github.com/hiyouga/LLaMA-Factory
<h3>Parameters:</h3>
- Finetune_Type : LoRA
- GPUs : A100x4(80GB)
- Epochs : 3
- Batchsize : 8
|
kekmodel/StopCarbon-10.7B-v2 | kekmodel | "2024-01-03T16:57:26Z" | 1,348 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T08:07:00Z" | ---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
---
# StopCarbon
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- upstage/SOLAR-10.7B-Instruct-v1.0
- VAGOsolutions/SauerkrautLM-SOLAR-Instruct
- merge_method: ties
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
``` |
jeonsworld/CarbonVillain-en-10.7B-v5 | jeonsworld | "2024-01-02T10:57:58Z" | 1,348 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T17:55:57Z" | ---
license: cc-by-nc-sa-4.0
language:
- en
---
# CarbonVillain
**This model was created without any additional training (merge only), as a stand against indiscriminate carbon emissions.**
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
-
- method: slerp
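The merge configuration itself is not included in the card; as background, here is a minimal sketch of spherical linear interpolation (SLERP), the per-tensor operation that mergekit's `slerp` method applies, with an illustrative fallback to linear interpolation for near-parallel tensors.
```python
# Illustrative sketch of SLERP between two weight tensors (not the actual merge config).
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a, b = w_a.flatten(), w_b.flatten()
    a_unit, b_unit = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * w_a + t * w_b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return mixed.reshape(w_a.shape)
```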
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
# Evaluation
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v4)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | | |
ewqr2130/mistral-7b-raw-sft | ewqr2130 | "2024-01-08T18:24:02Z" | 1,348 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T18:11:33Z" | ---
license: mit
---
Take the raw Mistral 7B model and run SFT on it for 6,000 epochs. |
maywell/PiVoT-SUS-RP | maywell | "2024-01-15T10:37:23Z" | 1,348 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-12T12:34:19Z" | ---
license: apache-2.0
---
## Model Description
PiVoT-RP model based on SUS-34B.
Trained with LoRA on a private dataset.
Seq Len = 8192.
Super fast training thanks to **unsloth**. :>
Follow me on twitter: https://twitter.com/stablefluffy
Consider supporting me in making these models alone: https://www.buymeacoffee.com/mwell or with a Runpod Credit Gift 💕
Contact me on Telegram: https://t.me/AlzarTakkarsen |
Neuronovo/neuronovo-9B-v0.4 | Neuronovo | "2024-01-16T20:44:46Z" | 1,348 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:mlabonne/chatml_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-16T20:32:21Z" | ---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
- mlabonne/chatml_dpo_pairs
language:
- en
library_name: transformers
---
More information about previous [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) version available here: 🔗[Don't stop DPOptimizing!](https://www.linkedin.com/pulse/dont-stop-dpoptimizing-jan-koco%2525C5%252584-mq4qf)
Author: Jan Kocoń 🔗[LinkedIn](https://www.linkedin.com/in/jankocon/) 🔗[Google Scholar](https://scholar.google.com/citations?user=pmQHb5IAAAAJ&hl=en&oi=ao) 🔗[ResearchGate](https://www.researchgate.net/profile/Jan-Kocon-2)
Changes concerning [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2):
1. **Training Dataset**: In addition to the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset, this version incorporates the [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) dataset. The combined datasets enhance the model's capabilities in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation.
2. **Tokenizer and Formatting**: The tokenizer now originates directly from the [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) model.
3. **Training Configuration**: The training approach has shifted from using `max_steps=200` to `num_train_epochs=1`. This represents a change in the training strategy, focusing on epoch-based training rather than a fixed number of steps.
4. **Learning Rate**: The learning rate has been reduced to a smaller value of `5e-8`. This finer learning rate allows for more precise adjustments during the training process, potentially leading to better model performance. |
flemmingmiguel/MarcMistral-7B | flemmingmiguel | "2024-01-16T21:05:50Z" | 1,348 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"nfaheem/Marcoroni-7b-DPO-Merge",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-16T20:53:31Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- nfaheem/Marcoroni-7b-DPO-Merge
- EmbeddedLLM/Mistral-7B-Merge-14-v0.5
---
# MarcMistral-7B
MarcMistral-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [nfaheem/Marcoroni-7b-DPO-Merge](https://huggingface.co/nfaheem/Marcoroni-7b-DPO-Merge)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.5](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.5)
This is an experiment to find the best base merge for further fine-tuning, so expect a lot of experiments named after parts of the component models until a clear winner emerges in the benchmarks.
In this case, the highest-MMLU merge is merged with a high-ARC merge to see which qualities remain untouched or improve.
## 🧩 Configuration
```yaml
slices:
- sources:
- model: nfaheem/Marcoroni-7b-DPO-Merge
layer_range: [0, 32]
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.5
layer_range: [0, 32]
merge_method: slerp
base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0.5
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "flemmingmiguel/MarcMistral-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
TinyLlama/TinyLlama_v1.1_math_code | TinyLlama | "2024-06-07T01:23:44Z" | 1,348 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:cerebras/SlimPajama-627B",
"arxiv:2401.02385",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-03-09T09:40:09Z" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
language:
- en
---
# TinyLlama-1.1B-v1.1
- **Codebase:** [github.com/jzhang38/TinyLlama](https://github.com/jzhang38/TinyLlama)
- **Technical Report:** [arxiv.org/pdf/2401.02385](https://arxiv.org/pdf/2401.02385)
<div align="center">
<img src="https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b/resolve/main/TinyLlama_logo.png" width="300"/>
</div>
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
## Overview
In this project, rather than only training a single TinyLlama model, we first train TinyLlama on a corpus of 1.5 trillion tokens to obtain foundational language capabilities. Subsequently, we take this model and turn it into three different models by continual pre-training with three distinct data sampling strategies. For a visual representation of this process, please refer to the figure below.

## Pretraining
Due to these issues ([bug1](https://whimsical-aphid-86d.notion.site/Release-of-TinyLlama-1-5T-Checkpoints-Postponed-01b266998c1c47f78f5ae1520196d194?pvs=4), [bug2](https://whimsical-aphid-86d.notion.site/2023-12-18-Updates-from-TinyLlama-Team-7d30c01fff794da28ccc952f327c8d4f)), we retrained TinyLlama to provide a better model. We trained the model on 2T tokens and divided the pretraining into three stages: 1) basic pretraining, 2) continual pretraining with a specific domain, and 3) cooldown.
#### Basic pretraining
In this initial phase, we trained the model on SlimPajama only, to develop its commonsense reasoning capabilities. The model saw 1.5T tokens during this basic pretraining period. Since we used a cluster with 4 A100-40G GPUs per node and only shard model weights within a node, we could only set the batch size to approximately 1.8M tokens this time.
#### Continual pretraining with specific domain
We incorporated three different kinds of corpora during this pretraining: SlimPajama (the same as in the first phase), Math&Code (Starcoder and Proof Pile), and Chinese (SkyPile). This approach allowed us to develop three variant models with specialized capabilities.
During the first ~6B tokens of this stage, we linearly increased the sampling proportion of the domain-specific corpus (excluding SlimPajama, whose proportion remained unchanged compared with stage 1). This sampling warmup strategy was designed to gradually adjust the distribution of the pretraining data, ensuring a more stable training process. After this warmup, we continued pretraining the model with a fixed sampling strategy until reaching ~1.85T tokens.
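As a rough illustration of this warmup (not code from the TinyLlama repository), the domain-specific sampling weight could be ramped linearly over the first ~6B tokens; the 25% target below corresponds to the math/code split (15% Starcoder + 10% Proof Pile) in the data tables further down.
```python
# Illustrative sketch: linearly warm up the domain-specific sampling proportion
# over the first ~6B tokens of stage 2, then hold it constant.
def domain_sampling_weight(tokens_seen: float, warmup_tokens: float = 6e9,
                           target_weight: float = 0.25) -> float:
    progress = min(tokens_seen / warmup_tokens, 1.0)
    return target_weight * progress  # the remaining probability mass goes to SlimPajama

print(domain_sampling_weight(3e9))  # halfway through the warmup -> 0.125
```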
#### Cooldown
Implementing a cooldown phase has become a crucial technique for achieving better model convergence at the end of pretraining. However, since we had already used a cosine learning rate schedule from the beginning, it was challenging to alter the learning rate for cooldown the way MiniCPM or DeepSeek do. Therefore, we cooled down by adjusting the batch size instead: we increased the batch size from 1.8M to 7.2M tokens while keeping the original cosine learning rate schedule during the cooldown stage.
#### TinyLlama model family
Following this extensive and detailed pretraining process, we are now releasing three specialized versions of our model:
1. **TinyLlama_v1.1**: The standard version, used for general purposes.
2. **TinyLlama_v1.1_Math&Code**: Equipped with better ability for math and code.
3. **TinyLlama_v1.1_Chinese**: Good understanding capacity for Chinese.
## Data
Here we list our data distribution in each stage:
### TinyLlama_v1.1
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 100.0 | 100.0 |
### TinyLlama_v1.1_math_code
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 75.0 | 75.0 |
| starcoder | - | 15.0 | 15.0 |
| proof_pile | - | 10.0 | 10.0 |
### TinyLlama_v1.1_chinese
| Corpus | Basic pretraining | Continual pretraining with specific domain | Cooldown |
| ------------- | ----------------- | ------------------------------------------ | -------- |
| Slimpajama | 100.0 | 50.0 | 50.0 |
| skypile | - | 50.0 | 50.0 |
### How to use
You will need transformers>=4.31.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "TinyLlama/TinyLlama_v1.1"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
do_sample=True,
top_k=10,
num_return_sequences=1,
repetition_penalty=1.5,
eos_token_id=tokenizer.eos_token_id,
max_length=500,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
### Eval
| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
| ----------------------------------------- | --------------- | --------- | --------- | ---------- | --------- | --------- | ----- | --------- | --------- |
| Pythia-1.0B | 300B | 47.16 | 31.40 | 53.43 | 27.05 | 48.99 | 60.83 | 69.21 | 48.30 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |
| TinyLlama-1.1B-v1.1 | 2T | **61.47** | **36.80** | 59.43 | 32.68 | **55.47** | 55.99 | **73.56** | 53.63 |
| TinyLlama-1.1B-v1_math_code | 2T | 60.80 | 36.40 | **60.22** | **33.87** | 55.20 | 57.09 | 72.69 | **53.75** |
| TinyLlama-1.1B-v1.1_chinese | 2T | 58.23 | 35.20 | 59.27 | 31.40 | 55.35 | **61.41** | 73.01 | 53.41 | |
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | kaitchup | "2024-04-25T06:49:29Z" | 1,348 | 1 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-04-25T06:48:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ChenWeiLi/Med-ChimeraLlama-3-8B_SHERP | ChenWeiLi | "2024-05-20T08:13:50Z" | 1,348 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:mlabonne/ChimeraLlama-3-8B-v3",
"base_model:johnsnowlabs/JSL-MedLlama-3-8B-v2.0",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-14T01:10:36Z" | ---
base_model:
- mlabonne/ChimeraLlama-3-8B-v3
- johnsnowlabs/JSL-MedLlama-3-8B-v2.0
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# Chimera_MedLlama-3-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mlabonne/ChimeraLlama-3-8B-v3](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3)
* [johnsnowlabs/JSL-MedLlama-3-8B-v2.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v2.0)
### Evaluation
- multimedqa (0-shot)<br/>
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------------------------|-------|------|-----:|--------|-----:|---|-----:|
| - medmcqa |Yaml |none | 0|acc |0.6087|± |0.0075|
| | |none | 0|acc_norm|0.6087|± |0.0075|
| - medqa_4options |Yaml |none | 0|acc |0.6269|± |0.0136|
| | |none | 0|acc_norm|0.6269|± |0.0136|
| - anatomy (mmlu) | 0|none | 0|acc |0.6963|± |0.0397|
| - clinical_knowledge (mmlu) | 0|none | 0|acc |0.7585|± |0.0263|
| - college_biology (mmlu) | 0|none | 0|acc |0.7847|± |0.0344|
| - college_medicine (mmlu) | 0|none | 0|acc |0.6936|± |0.0351|
| - medical_genetics (mmlu) | 0|none | 0|acc |0.8200|± |0.0386|
| - professional_medicine (mmlu)| 0|none | 0|acc |0.7684|± |0.0256|
|stem |N/A |none | 0|acc_norm|0.6129|± |0.0066|
| | |none | 0|acc |0.6440|± |0.0057|
| - pubmedqa | 1|none | 0|acc |0.7480|± |0.0194|
|Groups|Version|Filter|n-shot| Metric |Value | |Stderr|
|------|-------|------|-----:|--------|-----:|---|-----:|
|stem |N/A |none | 0|acc_norm|0.6129|± |0.0066|
| | |none | 0|acc |0.6440|± |0.0057|
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mlabonne/ChimeraLlama-3-8B-v3
layer_range: [0, 32]
- model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/ChimeraLlama-3-8B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
second-state/Yi-1.5-9B-Chat-16K-GGUF | second-state | "2024-07-02T10:11:34Z" | 1,348 | 5 | null | [
"gguf",
"text-generation",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-05-17T14:33:38Z" | ---
base_model: 01-ai/Yi-1.5-9B-Chat-16K
inference: false
model_creator: 01-ai
model_name: Yi-1.5-9B-Chat-16K
model_type: yi
pipeline_tag: text-generation
quantized_by: Second State Inc.
license: apache-2.0
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Yi-1.5-9B-Chat-16K-GGUF
## Original Model
[01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K)
## Run with LlamaEdge
<!-- - LlamaEdge version: [v0.10.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.10.0) and above -->
- LlamaEdge version: coming soon
-
- Prompt template
- Prompt type: `chatml`
- Prompt string
```text
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
- Context size: `16384`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Yi-1.5-9B-Chat-16K-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template chatml \
--reverse-prompt "<|im_end|>" \
--ctx-size 16384 \
--model-name Yi-1.5-9B-Chat-16K
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Yi-1.5-9B-Chat-16K-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template chatml \
--reverse-prompt "<|im_end|>" \
--ctx-size 16384
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Yi-1.5-9B-Chat-16K-Q2_K.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q2_K.gguf) | Q2_K | 2 | 3.35 GB| smallest, significant quality loss - not recommended for most purposes |
| [Yi-1.5-9B-Chat-16K-Q3_K_L.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q3_K_L.gguf) | Q3_K_L | 3 | 4.69 GB| small, substantial quality loss |
| [Yi-1.5-9B-Chat-16K-Q3_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q3_K_M.gguf) | Q3_K_M | 3 | 4.32 GB| very small, high quality loss |
| [Yi-1.5-9B-Chat-16K-Q3_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q3_K_S.gguf) | Q3_K_S | 3 | 3.9 GB| very small, high quality loss |
| [Yi-1.5-9B-Chat-16K-Q4_0.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q4_0.gguf) | Q4_0 | 4 | 5.04 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Yi-1.5-9B-Chat-16K-Q4_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q4_K_M.gguf) | Q4_K_M | 4 | 5.33 GB| medium, balanced quality - recommended |
| [Yi-1.5-9B-Chat-16K-Q4_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q4_K_S.gguf) | Q4_K_S | 4 | 5.07 GB| small, greater quality loss |
| [Yi-1.5-9B-Chat-16K-Q5_0.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q5_0.gguf) | Q5_0 | 5 | 6.11 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Yi-1.5-9B-Chat-16K-Q5_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q5_K_M.gguf) | Q5_K_M | 5 | 6.26 GB| large, very low quality loss - recommended |
| [Yi-1.5-9B-Chat-16K-Q5_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q5_K_S.gguf) | Q5_K_S | 5 | 6.11 GB| large, low quality loss - recommended |
| [Yi-1.5-9B-Chat-16K-Q6_K.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q6_K.gguf) | Q6_K | 6 | 7.25 GB| very large, extremely low quality loss |
| [Yi-1.5-9B-Chat-16K-Q8_0.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-Q8_0.gguf) | Q8_0 | 8 | 9.38 GB| very large, extremely low quality loss - not recommended |
| [Yi-1.5-9B-Chat-16K-f16.gguf](https://huggingface.co/second-state/Yi-1.5-9B-Chat-16K-GGUF/blob/main/Yi-1.5-9B-Chat-16K-f16.gguf) | f16 | 16 | 17.7 GB| |
*Quantized with llama.cpp b3135*
|
QuantFactory/Halu-8B-Llama3-v0.35-GGUF | QuantFactory | "2024-06-04T09:22:43Z" | 1,348 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"base_model:Hastagaras/Halu-8B-Llama3-v0.35",
"license:llama3",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-02T11:26:55Z" | ---
library_name: transformers
license: llama3
base_model: Hastagaras/Halu-8B-Llama3-v0.35
pipeline_tag: text-generation
---
# QuantFactory/Halu-8B-Llama3-v0.35-GGUF
This is quantized version of [Hastagaras/Halu-8B-Llama3-v0.35](https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.35) created using llama.cpp
# Model Description
This model follows similar steps to the Halu 0.3 model, but it utilizes a different base model.
The structure is as follows:
```
Base Model: [Daredevil](https://huggingface.co/mlabonne/Daredevil-8B)
|
Adapter: [HaluStory]
|
Adapter: [HaluConversation]
```
The overall structure is essentially the same, with the primary difference being the base model used. While the 0.3 model employed the [Iterative-DPO](https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final) model, this particular model is built upon the [Daredevil](https://huggingface.co/mlabonne/Daredevil-8B) model.
Despite the change in the base model, the adapters remain the same. After some testing, the 0.3 version is better at following instructions.
NOTES: You can see the difference between the 0.3 and 0.35 when you use 0 temperature and ask, "Who developed you?"
This model's answer will be something like "Developed by Meta AI," while the 0.3 version will answer like this: "Developed by OpenAI based on GPT-3."
So, from now on, all the Halu models whose versions end in 0.x0 will be based on the SFR Iterative DPO model, while the 0.x5 versions will be based on a Llama3-Instruct-based model (in this case, Daredevil).
On the other hand... the [Anjir Model](https://huggingface.co/Hastagaras/Anjir-8B-L3) responds like this: "Developed by Meta AI based on GPT-3." *Hehe, the baukit slerp surgery is a success, I guess.* |
bartowski/Tess-v2.5-Qwen2-72B-GGUF | bartowski | "2024-06-13T07:19:44Z" | 1,348 | 7 | null | [
"gguf",
"text-generation",
"license:other",
"region:us"
] | text-generation | "2024-06-13T05:37:53Z" | ---
license: other
license_name: qwen2
license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Tess-v2.5-Qwen2-72B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3140">b3140</a> for quantization.
Original model: https://huggingface.co/migtissera/Tess-v2.5-Qwen2-72B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Tess-v2.5-Qwen2-72B-Q8_0.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/tree/main/Tess-v2.5-Qwen2-72B-Q8_0.gguf) | Q8_0 | 77.26GB | Extremely high quality, generally unneeded but max available quant. |
| [Tess-v2.5-Qwen2-72B-Q5_K_M.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/tree/main/Tess-v2.5-Qwen2-72B-Q5_K_M.gguf) | Q5_K_M | 54.44GB | High quality, *recommended*. |
| [Tess-v2.5-Qwen2-72B-Q4_K_M.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-Q4_K_M.gguf) | Q4_K_M | 47.41GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Tess-v2.5-Qwen2-72B-IQ4_XS.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-IQ4_XS.gguf) | IQ4_XS | 39.70GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Tess-v2.5-Qwen2-72B-Q3_K_M.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-Q3_K_M.gguf) | Q3_K_M | 37.69GB | Even lower quality. |
| [Tess-v2.5-Qwen2-72B-IQ3_M.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-IQ3_M.gguf) | IQ3_M | 35.50GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Tess-v2.5-Qwen2-72B-Q3_K_S.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-Q3_K_S.gguf) | Q3_K_S | 34.48GB | Low quality, not recommended. |
| [Tess-v2.5-Qwen2-72B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-IQ3_XXS.gguf) | IQ3_XXS | 31.84GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Tess-v2.5-Qwen2-72B-Q2_K.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-Q2_K.gguf) | Q2_K | 29.81GB | Very low quality but surprisingly usable. |
| [Tess-v2.5-Qwen2-72B-IQ2_M.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-IQ2_M.gguf) | IQ2_M | 29.33GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Tess-v2.5-Qwen2-72B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | Lower quality, uses SOTA techniques to be usable. |
| [Tess-v2.5-Qwen2-72B-IQ1_M.gguf](https://huggingface.co/bartowski/Tess-v2.5-Qwen2-72B-GGUF/blob/main/Tess-v2.5-Qwen2-72B-IQ1_M.gguf) | IQ1_M | 23.74GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Tess-v2.5-Qwen2-72B-GGUF --include "Tess-v2.5-Qwen2-72B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Tess-v2.5-Qwen2-72B-GGUF --include "Tess-v2.5-Qwen2-72B-Q8_0.gguf/*" --local-dir Tess-v2.5-Qwen2-72B-Q8_0
```
You can either specify a new local-dir (Tess-v2.5-Qwen2-72B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
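As a rough illustration of that sizing rule (not part of the original card), the snippet below picks the largest quant from the table above that leaves about 2GB of headroom; the sizes are copied from the table and the headroom value is an assumption.
```python
# Illustrative helper: choose the largest quant that fits in the available (V)RAM
# with ~2 GB of headroom. Sizes (GB) are taken from the table above.
quants_gb = {
    "Q8_0": 77.26, "Q5_K_M": 54.44, "Q4_K_M": 47.41, "IQ4_XS": 39.70,
    "Q3_K_M": 37.69, "IQ3_M": 35.50, "Q3_K_S": 34.48, "IQ3_XXS": 31.84,
    "Q2_K": 29.81, "IQ2_M": 29.33, "IQ2_XXS": 25.49, "IQ1_M": 23.74,
}

def pick_quant(available_gb: float, headroom_gb: float = 2.0) -> str | None:
    fitting = {name: size for name, size in quants_gb.items() if size <= available_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(48))   # 48 GB of VRAM -> 'IQ4_XS'
print(pick_quant(128))  # 128 GB of combined RAM + VRAM -> 'Q8_0'
```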
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
NikolayKozloff/llama3-tweety-8b-italian-Q8_0-GGUF | NikolayKozloff | "2024-06-25T01:34:05Z" | 1,348 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:RiTA-nlp/llama3-tweety-8b-italian",
"region:us"
] | null | "2024-06-25T01:33:28Z" | ---
base_model: RiTA-nlp/llama3-tweety-8b-italian
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/llama3-tweety-8b-italian-Q8_0-GGUF
This model was converted to GGUF format from [`RiTA-nlp/llama3-tweety-8b-italian`](https://huggingface.co/RiTA-nlp/llama3-tweety-8b-italian) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/RiTA-nlp/llama3-tweety-8b-italian) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q8_0-GGUF --hf-file llama3-tweety-8b-italian-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q8_0-GGUF --hf-file llama3-tweety-8b-italian-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q8_0-GGUF --hf-file llama3-tweety-8b-italian-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q8_0-GGUF --hf-file llama3-tweety-8b-italian-q8_0.gguf -c 2048
```
|
facebook/regnet-y-040 | facebook | "2023-03-26T11:21:20Z" | 1,347 | 1 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2022-03-18T15:36:08Z" | ---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet). |
digiplay/HIJKLMix_v2 | digiplay | "2023-08-18T16:22:13Z" | 1,347 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-17T17:00:38Z" | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
https://civitai.com/models/127898?modelVersionId=142508
Sample images generated with Google Colab + diffusers:



|
heegyu/polyglot-ko-3.8b-chat | heegyu | "2023-09-20T01:09:40Z" | 1,347 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-21T04:09:50Z" | ---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
---
# heegyu/polyglot-ko-3.8b-chat
- A model trained from [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) on several Korean instruction datasets.
## Datasets used
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [part2_ko_uncleaned from dbdu/ShareGPT-74k-ko](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a very small portion) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |
- From the KULLM v2 dataset, only the GPT4ALL and Dolly data were extracted and used.
- For more Korean training datasets, see the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository.
## Generation Prompt
- Generate until the EOS token (<|endoftext|>) appears.
- Keeping the system prompt at the very top seems to produce good answers more often.
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
서울에서 강릉 가려면 어떻게 가야해?
### 챗봇:
```
Generated answer:
```
서울에서 강릉 가는 방법은 다양합니다. 비행기를 타고 가거나 KTX를 이용하거나 자가용으로 이동하는 방법이 있습니다.
1. 비행기: 비행기는 서울에서 양양 또는 원주 공항까지 운항합니다. 소요 시간은 약 2시간이며, 운임은 약 20,000원입니다.
2. KTX: KTX는 서울에서 강릉까지 운행하며, 요금은 일반 및 특실 모두 6,550원으로 통일되어 있습니다. 열차 시간은 서울에서 강릉까지 1시간 30분 정도 소요됩니다.
3. 자가용: 자가용을 이용해 강릉으로 가는 방법도 많이 있습니다. 서울에서 경포에 이르는 길은 다양하며, 도로 상황에 따라 주행 시간 및 이동 거리는 달라질 수 있습니다. 참고할 수 있는 정보는 강릉 관광안내소에서 받으실 수 있습니다.
위 방법을 이용하여 서울에서 강릉까지의 이동 시간과 비용을 고려했습니다. 그러나 교통 상황이나 도로 상황 등에 따라 추가 정보가 필요할 수 있습니다.
```
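Below is a minimal generation sketch (not part of the original card) showing how this prompt format could be used with `transformers`; the sampling settings are illustrative, and loading with `device_map="auto"` assumes `accelerate` is installed.
```python
# Illustrative example of generating with the prompt format above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/polyglot-ko-3.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.\n"
    "### 사용자:\n"
    "서울에서 강릉 가려면 어떻게 가야해?\n"
    "### 챗봇:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate until the EOS token (<|endoftext|>) is produced.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True,
                         eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```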
Conversation example (with prompt)
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
중국의 여행지 추천해줘
### 챗봇:
아래는 중국의 다양한 여행지 중 몇 가지를 추천해드리겠습니다.
- 상하이: 중국의 상하이는 현대적인 건물과 높은 빌딩이 많은 도시입니다. 강을 따라 이동하는 선박은 상하이의 풍경과 문화를 감상할 수 있는 장소입니다. (사진: 중국 사진 제공 사이트)
- 치바오: 치바오는 상하이의 한 도시로, 독특한 건축물과 역사적인 유물들이 많이 있는 곳입니다. 아름다운 해변도 있으니 일정을 확인해보시면 좋을 것 같습니다. (사진: 중국 사진 제공 사이트)
- 항저우: 항저우는 중국의 4대 정원 중 하나로 유명한 곳입니다. 아름다운 정원, 오래된 건물, 그리고 풍부한 역사 문화 체험을 즐길 수 있는 도시입니다. (사진: 중국 사진 제공 사이트)
- 난징: 난징은 중국의 수도인 난징에 위치한 도시로, 역사와 문화를 기반으로 다양한 명소들을 찾을 수 있습니다. (사진: 중국 사진 제공 사이트)
- 베이징: 중국의 수도인 베이징은 중국의 문화와 역사를 체험할 수 있는 도시입니다. 다양한 관광 명소와 역사 유물들이 여러분을 기다리고 있을 것입니다. (사진: 중국 사진 제공 사이트)
- 베이징의 창춘: 창춘은 중국의 동북 지방에 위치한 도시로, 전통 문화와 역사를 체험할 수 있는 좋은 장소입니다.
이 외에도 중국 문화와 역사에 대한 자세한 정보는 위의 목록을 참고하여 다양한 매체에서 찾아보실 수 있습니다.
```
Multi-turn conversation example (with prompt)
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 자세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
요즘 즐겨하는 컴퓨터 게임이 있어?
### 챗봇:
죄송하지만 질문에 대한 정확한 답변을 위해서는 게임의 유형과 장르를 더 자세히 알려주시는 것이 좋습니다.
### 사용자:
음 RPG 장르 쪽으로 추천받고 싶은데
### 챗봇:
RPG 장르 중에서도 간단한 캐주얼 게임을 선호하신다면, 제가 추천해드릴 수 있습니다. 바로 'Rogue Dragon(가칭)'이라는 게임입니다. 이 게임은 판타지, 액션 등 다양한 장르를 접목하여 즐길 수 있으며, 화려한 그래픽, 탄탄한 스토리에 중독성 강한 타격시스템도 특징입니다.
### 사용자:
어디서 할 수 있어?
### 챗봇:
'Rogue Dragon'은 스팀에서 무료로 플레이하실 수 있습니다. 다른 방법으로는, 공식 웹사이트에서 다운로드하셔서 플레이하실 수 있습니다.
``` |
haoranxu/ALMA-7B-Pretrain | haoranxu | "2024-01-18T04:05:14Z" | 1,347 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2309.11674",
"arxiv:2401.08417",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-17T17:42:40Z" | ---
license: mit
---
**ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
Please find more details in our [paper](https://arxiv.org/abs/2309.11674).
**[ALMA-R](https://arxiv.org/abs/2401.08417) (NEW!) is released now!** ALMA-R builds upon ALMA models, with further LoRA fine-tuning using our proposed **Contrastive Preference Optimization (CPO)**, as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 or the WMT winners!
```
@misc{xu2023paradigm,
title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
year={2023},
eprint={2309.11674},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
We release six translation models presented in the paper:
- **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization.
- **ALMA-13B**: Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data
- **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data
- **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization.
Model checkpoints are released at huggingface:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-R](https://huggingface.co/haoranxu/ALMA-7B-R) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-R](https://huggingface.co/haoranxu/ALMA-13B-R) |
**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models.**
Datasets used by ALMA and ALMA-R are also released at huggingface now (NEW!)
| Datasets | Train / Validation| Test |
|:-------------:|:---------------:|:---------:|
| Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) |
| Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) |
A quick start to use our best system (ALMA-13B-LoRA) for translation. An example of translating "我爱机器翻译。" into English:
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import LlamaTokenizer
# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')
# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()
# Translation
with torch.no_grad():
generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA) |
beberik/Nyxene-11B | beberik | "2024-03-04T16:15:40Z" | 1,347 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-02T17:11:58Z" | ---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Nyxene-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.34
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-11B
name: Open LLM Leaderboard
---
## Description
This repo contains bf16 files of Nyxene-11B. Like [OmniMix](https://huggingface.co/Undi95/Mistral-11B-OmniMix) but with new models.
## Model used
- [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
- [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
- [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA)
- [ehartford/dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b)
## Prompt template
The best one after further testing is this one:
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
## The secret sauce
dolphin-juanako-11B :
```
slices:
- sources:
- model: fblgit/juanako-7b-UNA
layer_range: [0, 24]
- sources:
- model: ehartford/dolphin-2.1-mistral-7b
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Starling-NeuralHermes-11B :
```
slices:
- sources:
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 24]
- sources:
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-11B :
```
slices:
- sources:
- model: dolphin-juanako-11B
layer_range: [0, 48]
- model: Starling-NeuralHermes-11B
layer_range: [0, 48]
merge_method: slerp
base_model: dolphin-juanako-11B
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I use [mergekit](https://github.com/cg123/mergekit) for all the manipulation told here.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.72|
|AI2 Reasoning Challenge (25-Shot)|68.34|
|HellaSwag (10-Shot) |84.54|
|MMLU (5-Shot) |65.09|
|TruthfulQA (0-shot) |57.50|
|Winogrande (5-shot) |79.08|
|GSM8k (5-shot) |51.78|
|
luffycodes/vicuna-class-shishya-ac-hal-7b-ep3 | luffycodes | "2023-12-15T08:24:12Z" | 1,347 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2305.13272",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-15T08:13:08Z" | ---
license: llama2
---
If you use this work, please cite:
CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles
https://arxiv.org/abs/2305.13272
```
@misc{sonkar2023class,
title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles},
author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk},
year={2023},
eprint={2305.13272},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
We-Want-GPU/Yi-Ko-6B-orca-alpaca-gpt4-math-lora-DPO | We-Want-GPU | "2023-12-21T01:26:08Z" | 1,347 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-21T01:22:16Z" | Entry not found |
lorinma/yi6B_Vicuna | lorinma | "2024-04-29T08:39:49Z" | 1,347 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T01:40:11Z" | ---
language:
- en
license: mit
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
model-index:
- name: yi6B_Vicuna
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 46.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lorinma/yi6B_Vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 69.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lorinma/yi6B_Vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lorinma/yi6B_Vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.11
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lorinma/yi6B_Vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lorinma/yi6B_Vicuna
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 18.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lorinma/yi6B_Vicuna
name: Open LLM Leaderboard
---
**Bug**: Having a bit of an issue with the tokenizer, still figuring it out... You can use the original Yi tokenizer configuration.
This reproduces Vicuna, but based on Yi-6B. The training data I used was ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json.
The training framework I used is https://github.com/shibing624/MedicalGPT; the training shell script:
```
CUDA_VISIBLE_DEVICES=0,1,2,3,5 torchrun --nproc_per_node 5 ../supervised_finetuning.py \
--model_type auto \
--model_name_or_path /data/llm/models/Pretrained/yi-6B/01ai/Yi-6B \
--tokenizer_name_or_path /data/llm/models/Pretrained/yi-6B/01ai/Yi-6B \
--train_file_dir ../data/finetune/vicuna/ \
--per_device_train_batch_size 2\
--do_train \
--max_train_samples -1 \
--num_train_epochs 3 \
--learning_rate 2e-5 \
--weight_decay 0. \
--bf16 \
--use_peft False \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy epoch \
--save_total_limit 5 \
--gradient_accumulation_steps 1 \
--preprocessing_num_workers 8 \
--output_dir ../outputs/20240106_yi6B_vicuna \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--torch_dtype bfloat16 \
--device_map auto \
--report_to tensorboard \
--ddp_find_unused_parameters False \
--gradient_checkpointing True \
--cache_dir ./cache \
--model_max_length 4096 \
--deepspeed ../deepspeed_zero_stage2_config_no16.json \
--template_name yi
```
The training used 5*A800 GPUs for 3 epochs:
```
***** train metrics *****
epoch = 3.0
train_loss = 0.3785
train_runtime = 1 day, 10:01:13.95
train_samples = 93204
train_samples_per_second = 2.24
train_steps_per_second = 0.224
```
Post-training inference also uses this repository:
```
CUDA_VISIBLE_DEVICES=4 python gradio_demo.py --model_type auto --base_model /data/mn/shibing624/MedicalGPT-1.6.3-231215/outputs/20240106_yi6B_vicuna --tokenizer_path /data/mn/shibing624/MedicalGPT-1.6.3-231215/outputs/20240106_yi6B_vicuna --template_name yi --gpus 4
CUDA_VISIBLE_DEVICES=6 python inference.py --model_type auto --base_model /data/mn/shibing624/MedicalGPT-1.6.3-231215/outputs/20240106_yi6B_vicuna --template_name yi --gpus 6 --interactive --tokenizer_path /data/llm/models/Pretrained/yi-6B/01ai/Yi-6B
```
From some preliminary results, we can see that the conversation is natural and informative (unsurprisingly).

We also observe that the unfiltering seems to be working! **Heads up:** some examples are unsafe and inappropriate; this is entirely for research purposes, to test how alignment-filtered SFT data affects an LLM's final output.


**Update:** Evaluated on the Open LLM Leaderboard:

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lorinma__yi6B_Vicuna)
| Metric |Value|
|---------------------------------|----:|
|Avg. |51.02|
|AI2 Reasoning Challenge (25-Shot)|46.16|
|HellaSwag (10-Shot) |69.30|
|MMLU (5-Shot) |58.43|
|TruthfulQA (0-shot) |48.11|
|Winogrande (5-shot) |65.67|
|GSM8k (5-shot) |18.42|
|
TeeZee/2xbagel-dpo-34b-v0.2 | TeeZee | "2024-06-25T19:12:15Z" | 1,347 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T01:48:24Z" | ---
tags:
- merge
model-index:
- name: 2xbagel-dpo-34b-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.15
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
license: apache-2.0
---
# Bagel DPO 57B

## Model Details
- A result of interleaving layers of [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) with itself.
- The resulting model has 100 layers and approximately 57 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/2xbagel-dpo-34b-v0.2/blob/main/mergekit-config.yml) for details on the merge method used.
**Warning: This model can produce NSFW content!**
## Results
A bigger version of the original, and uncensored like the original. All comments are greatly appreciated; download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__2xbagel-dpo-34b-v0.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.66|
|AI2 Reasoning Challenge (25-Shot)|65.27|
|HellaSwag (10-Shot) |79.35|
|MMLU (5-Shot) |73.64|
|TruthfulQA (0-shot) |67.15|
|Winogrande (5-shot) |76.40|
|GSM8k (5-shot) | 2.12|
|
proto-llm/uniwiz-7B-v0.2 | proto-llm | "2024-01-11T10:30:48Z" | 1,347 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T10:22:31Z" | ---
license: apache-2.0
---
## **Model Overview:**
- **Model Name:** UniWiZ-7B-v0.2
- **Architecture:** Mistral-7B
- **Training Objective:** Knowledge and Safety Orchestration
- **Training Dataset:** Curated dataset encompassing diverse knowledge domains and safety-focused content
- **Training Duration:** [Specify training duration]
## **Intended Use:**
UniWiZ-7B-v0.1 is designed for various natural language understanding tasks, including but not limited to text generation, summarization, question-answering, and conversation. Its training data emphasizes a broad spectrum of knowledge domains while incorporating safety considerations to ensure responsible and ethical use.
## **Scope of Applications:**
UniWiZ-7B-v0.1 can be employed across a wide range of applications such as:
1. **Content Generation:** Creating human-like text for articles, blogs, creative writing, etc.
2. **Summarization:** Condensing lengthy texts into concise summaries while preserving key information.
3. **Question-Answering:** Responding to user queries by extracting relevant information from its extensive knowledge base.
4. **Conversational Agents:** Engaging in natural and contextually relevant conversations with users.
5. **Educational Assistance:** Providing explanations, definitions, and insights on various topics.
## **Data and Training:**
UniWiZ-7B-v0.1 was trained on a diverse dataset encompassing knowledge from different domains. The training process included safety orchestration to mitigate biases and ensure ethical AI behavior. The model's architecture, Mistral-7B, enables it to understand and generate coherent and contextually relevant text.
## **Performance and Limitations:**
While UniWiZ-7B-v0.1 demonstrates strong performance across a variety of tasks, it may exhibit limitations in:
1. **Handling Uncommon or Specialized Topics:** The model's knowledge is extensive but may not cover extremely niche or specialized subjects.
2. **Sensitive Content:** Despite safety measures, there is a possibility of generating content that may be considered inappropriate or offensive.
Users are encouraged to exercise discretion and provide feedback to improve the model's performance and address any potential biases or shortcomings.
## **Ethical Considerations:**
UniWiZ-7B-v0.1 is developed with ethical AI principles in mind. Proto-AI is committed to addressing concerns related to bias, fairness, and the responsible use of AI technology. Users are encouraged to report unintended behavior or bias for continuous improvement.
## **Future Updates:**
Proto-AI is dedicated to refining and enhancing UniWiZ-7B-v0.1. Regular updates will be released to improve performance, address user feedback, and incorporate the latest advancements in AI research.
This model card is a reference for users to understand UniWiZ-7B-v0.1's capabilities, limitations, and ethical considerations. Proto-AI values transparency and accountability in the deployment and use of AI models. More details about the model and training will be released later.
|
Weyaxi/Cosmosis-3x34B | Weyaxi | "2024-01-17T08:30:26Z" | 1,347 | 9 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"yi",
"moe",
"conversational",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-12T14:44:03Z" | ---
license: other
tags:
- yi
- moe
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
model-index:
- name: Cosmosis-3x34B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Cosmosis-3x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Cosmosis-3x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Cosmosis-3x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.82
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Cosmosis-3x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Cosmosis-3x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/Cosmosis-3x34B
name: Open LLM Leaderboard
---

# Cosmosis-3x34B
This is the model for Cosmosis-3x34B. I used [this repo](https://bit.ly/weyaxi-moe-repo) to make this MOE model.
# Prompt Template(s):
Since [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) uses many prompt templates, you can use the prompt templates provided by bagel as well as the other experts' prompt templates.
**Note:** I currently do not know which prompt template is best.
### ChatML:
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### Human Assistant
```
Human: {user}
### Assistant: {assistant}
```
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system}
{instruction}
### Response:
```
### Vicuna
```
{system}
USER: {instruction}
ASSISTANT:
```
Visit [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) to try more prompt templates.
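For the ChatML format, a minimal sketch with `transformers` (this assumes the tokenizer ships a ChatML chat template; if it does not, build the `<|im_start|>`/`<|im_end|>` string by hand as shown above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Weyaxi/Cosmosis-3x34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 12 * 7 + 5? Think step by step."},
]
# apply_chat_template only works if a chat template is configured for this tokenizer (assumption).
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```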
# Yaml Config to reproduce
```yaml
base_model: nontoxic-bagel-34b-v0.2
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: bagel-dpo-34b-v0.2
    positive_prompts: ["question answering", "Q:", "science", "biology", "chemistry", "physics"]
negative_prompts: ["math", "reason", "mathematics", "solve", "count", "code", "python", "javascript", "programming", "algorithm"]
- source_model: Nous-Hermes-2-Yi-34B
positive_prompts: ["chat", "math", "reason", "mathematics", "solve", "count", "python", "javascript", "programming", "algorithm", "tell me", "assistant"]
- source_model: SUS-Chat-34B
positive_prompts: ["math", "reason", "mathematics", "solve", "count", "assistant"]
```
# Quantizationed versions
Quantizationed versions of this model is available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/Cosmosis-3x34B-GPTQ](https://huggingface.co/TheBloke/Cosmosis-3x34B-GPTQ)
##### GGUF
- [TheBloke/Cosmosis-3x34B-GGUF](https://huggingface.co/TheBloke/Cosmosis-3x34B-GGUF)
##### AWQ
- [TheBloke/Cosmosis-3x34B-AWQ](https://huggingface.co/TheBloke/Cosmosis-3x34B-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Cosmosis-3x34B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.39|
|AI2 Reasoning Challenge (25-Shot)|69.71|
|HellaSwag (10-Shot) |85.18|
|MMLU (5-Shot) |77.25|
|TruthfulQA (0-shot) |63.82|
|Winogrande (5-shot) |84.14|
|GSM8k (5-shot) |72.25|
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi) |
cookinai/Bald-Eagle-7B | cookinai | "2024-01-17T18:31:50Z" | 1,347 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-17T03:11:35Z" | ---
license: cc-by-nc-nd-4.0
---
# Bald Eagle 7B

Fine-tune of [fblgit/UNA-TheBeagle-7b-v1](https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1) with:
[https://huggingface.co/datasets/cognitivecomputations/dolphin](https://huggingface.co/datasets/cognitivecomputations/dolphin)
[https://huggingface.co/datasets/Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
[https://huggingface.co/datasets/Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
An attempt to make a well-optimized chat model by combining these high-performing Orca-inspired datasets.
sonthenguyen/NeuralHermes-2.5-Mistral-7B | sonthenguyen | "2024-01-19T22:26:58Z" | 1,347 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-18T19:15:08Z" | ---
license: apache-2.0
---
# Model Card for Model ID
Model metadata (from the card front matter):
- **Base model:** teknium/OpenHermes-2.5-Mistral-7B
- **Tags:** mistral, instruct, finetune, chatml, gpt4, synthetic data, distillation, dpo, rlhf
- **License:** apache-2.0
- **Language:** en
- **Datasets:** mlabonne/chatml_dpo_pairs
|
yunconglong/7Bx4_DPO_2e | yunconglong | "2024-01-20T03:15:55Z" | 1,347 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-20T02:49:29Z" | ---
license: mit
---
* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) with dataset jondurbin/truthy-dpo-v0.1
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
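For context, a heavily simplified sketch of what a TRL DPO run on that dataset looks like, written against the older TRL `DPOTrainer` interface (the starting checkpoint name, hyperparameters, and column handling are assumptions; this is not the exact script used for this model):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "path/to/pre-DPO-moe-checkpoint"  # hypothetical; the actual starting checkpoint is not named in this card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# jondurbin/truthy-dpo-v0.1 ships prompt/chosen/rejected preference pairs.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")
dataset = dataset.select_columns(["prompt", "chosen", "rejected"])

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # TRL builds a frozen reference copy when None is passed
    beta=0.1,         # assumed value
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
    ),
)
trainer.train()
```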
```
"num_experts_per_tok": 2
``` |
LordNoah/Alpaca-tuned-gpt2 | LordNoah | "2024-01-22T04:43:22Z" | 1,347 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:tatsu-lab/alpaca",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T02:41:39Z" | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources gpt2-large
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
This is a GPT-2 large model trained on the Alpaca dataset for 2 epochs.
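A minimal usage sketch (the Alpaca-style prompt below is an assumption based on how Alpaca-tuned models are usually prompted; it is not stated in this card):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="LordNoah/Alpaca-tuned-gpt2")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```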
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LordNoah/Alpaca_spin_gpt2_e0_se1 | LordNoah | "2024-01-22T15:01:35Z" | 1,347 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T14:46:10Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
A SPIN-based, Alpaca-trained GPT-2 model.
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75 | lightblue | "2024-05-30T09:57:29Z" | 1,347 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2405.18952",
"base_model:lightblue/suzume-llama-3-8B-multilingual",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-26T04:50:46Z" | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
base_model: lightblue/suzume-llama-3-8B-multilingual
model-index:
- name: workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top75_borda
results: []
---
# Suzume ORPO
<p align="center">
<img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png" alt="Suzume with Mitsu - a Japanese tree sparrow with honey on it"/>
</p>
[[Paper]](https://arxiv.org/abs/2405.18952) [[Dataset]](https://huggingface.co/datasets/lightblue/mitsu)
This is Suzume ORPO, an ORPO trained fine-tune of the [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) model using our [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset.
We have trained several versions of this model using ORPO, so we recommend that you use the best-performing model from our tests, [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half).
Note that this model has a non-commercial license, as we used the Command R and Command R+ models to generate our training data for this model ([lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu)).
We are currently working on developing a commercially usable model, so stay tuned for that!
# Model list
We have ORPO trained the following models using different proportions of the [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset:
* Trained on the top/bottom responses of all prompts in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full)
* Trained on the top/bottom responses of the prompts of the 75\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75)
* Trained on the top/bottom responses of the prompts of the 50\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half)
* Trained on the top/bottom responses of the prompts of the 25\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25)
# Model results
We compare the MT-Bench scores across 6 languages for our 4 ORPO trained models, as well as some baselines:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - The foundation model that our models are ultimately built upon
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) - The highest performing open model on the Chatbot arena that is of a similar size to ours
* gpt-3.5-turbo - A fairly high quality (although not state-of-the-art) proprietary LLM
* [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) - The base model which we train our ORPO finetunes from
| **MT-Bench language** | **meta-llama/Meta-Llama-3-8B-Instruct** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | **lightblue/suzume-llama-3-8B-multilingual** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25** |
|-----------------------|-----------------------------------------|-----------------------------------|-------------------|----------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|
| **Chinese 🇨🇳** | NaN | 6.97 | 7.55 | 7.11 | 7.65 | **7.77** | 7.74 | 7.44 |
| **English 🇺🇸** | 7.98 | 7.92 | **8.26** | 7.73 | 7.98 | 7.94 | 7.98 | 8.22 |
| **French 🇫🇷** | NaN | 7.29 | 7.74 | 7.66 | **7.84** | 7.46 | 7.78 | 7.81 |
| **German 🇩🇪** | NaN | 6.99 | 7.68 | 7.26 | 7.28 | 7.64 | 7.7 | **7.71** |
| **Japanese 🇯🇵** | NaN | 6.22 | **7.84** | 6.56 | 7.2 | 7.12 | 7.34 | 7.04 |
| **Russian 🇷🇺** | NaN | 8.28 | 7.94 | 8.19 | 8.3 | 8.74 | **8.94** | 8.81 |
We can see a noticeable improvement in most languages compared to the base model. We also find that our ORPO models achieve the highest score out of all the models we evaluated for a number of languages.
# Training data
We trained this model using the [lightblue/mitsu_full_borda](https://huggingface.co/datasets/lightblue/mitsu_full_borda) dataset.
# Training configuration
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: lightblue/suzume-llama-3-8B-multilingual
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
rl: orpo
orpo_alpha: 0.1
remove_unused_columns: false
chat_template: chatml
datasets:
- path: lightblue/mitsu_top75_borda
type: orpo.chat_template
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual-orpo/prepared_mitsu_top75_borda
val_set_size: 0.02
output_dir: /workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top75_borda
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
use_wandb: true
wandb_project: axolotl
wandb_entity: peterd
wandb_name: mitsu_top75_borda
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 8e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 20
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_top75_borda
This model is a fine-tuned version of [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6309 | 0.02 | 1 | 7.7104 |
| 3.9307 | 0.06 | 4 | 2.3582 |
| 0.1361 | 0.13 | 8 | 0.1163 |
| 0.1072 | 0.19 | 12 | 0.1045 |
| 0.1087 | 0.26 | 16 | 0.1007 |
| 0.1109 | 0.32 | 20 | 0.0971 |
| 0.1015 | 0.39 | 24 | 0.0908 |
| 0.1032 | 0.45 | 28 | 0.0872 |
| 0.0996 | 0.52 | 32 | 0.0968 |
| 0.1107 | 0.58 | 36 | 0.0982 |
| 0.1079 | 0.65 | 40 | 0.0911 |
| 0.1011 | 0.71 | 44 | 0.0893 |
| 0.1251 | 0.78 | 48 | 0.0866 |
| 0.1008 | 0.84 | 52 | 0.0863 |
| 0.0948 | 0.91 | 56 | 0.0863 |
| 0.0936 | 0.97 | 60 | 0.0863 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# How to cite
```tex
@article{devine2024sure,
title={Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets},
author={Devine, Peter},
journal={arXiv preprint arXiv:2405.18952},
year={2024}
}
```
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn)) |
kyujinpy/PlatYi-34B-Llama-Q | kyujinpy | "2024-03-04T12:09:29Z" | 1,346 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-01T19:30:35Z" | ---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
model-index:
- name: PlatYi-34B-Llama-Q
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.78
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.42
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Llama-Q
name: Open LLM Leaderboard
---
# **PlatYi-34B-Llama-Q**
<img src='./PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
PlatYi-34B-Llama-Q is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama)
**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
## Notice
While training, I used Q-LoRA.
The lora_r value is 64.
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **PlatYi-34B-Llama-Q** | 71.13 | 65.70 | 85.22 | 78.78 | 53.64 | 83.03 | 60.42 |
| PlatYi-34B-Llama | 68.37 | 67.83 | 85.35 | 78.26 | 53.46 | 82.87 | 42.46 |
| [Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) | 70.95 | 64.59 | 85.63 | 76.31 | 55.60 | 82.79 | 60.80 |
| [Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/PlatYi-34B-Llama-Q"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
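A short continuation of the snippet above showing generation (the Alpaca-style prompt is an assumption based on the Open-Platypus training data, and the sampling settings are illustrative):

```python
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain the Pythagorean theorem.\n\n"
    "### Response:\n"
)

inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```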
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-Llama-Q)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.13|
|AI2 Reasoning Challenge (25-Shot)|65.70|
|HellaSwag (10-Shot) |85.22|
|MMLU (5-Shot) |78.78|
|TruthfulQA (0-shot) |53.64|
|Winogrande (5-shot) |83.03|
|GSM8k (5-shot) |60.42|
|
BM-K/mistral-ko-7b-it-v2.0.1 | BM-K | "2023-12-26T12:33:54Z" | 1,346 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-26T12:25:16Z" | Entry not found |
alnrg2arg/test2_3 | alnrg2arg | "2024-01-24T14:27:11Z" | 1,346 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"abideen/NexoNimbus-7B",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-17T05:31:32Z" | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- abideen/NexoNimbus-7B
---
# test2_3
test2_3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: abideen/NexoNimbus-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
lodrick-the-lafted/Grafted-Llama2-2x70B | lodrick-the-lafted | "2024-01-29T21:09:03Z" | 1,346 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-19T19:34:57Z" | ---
license: llama2
tags:
- moe
- merge
---
<img src=https://huggingface.co/lodrick-the-lafted/Grafted-Llama2-2x70B/resolve/main/gl.png>
The Llamas are WinterGoddess + AuroraNights.
This is yet another mergekit abomination.
This is probably more of a "dense" MoE than a sparse one.
Unfortunately, most of the testing I have done with this model shows that it works well for a couple of sentences, then starts spouting gibberish. Don't waste your bandwidth.
<br/>
<br/>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lodrick-the-lafted__Grafted-Llama2-2x70B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.77|
|AI2 Reasoning Challenge (25-Shot)|72.61|
|HellaSwag (10-Shot) |89.57|
|MMLU (5-Shot) |71.67|
|TruthfulQA (0-shot) |66.49|
|Winogrande (5-shot) |84.37|
|GSM8k (5-shot) |57.92|
|
yunconglong/Truthful_DPO_MOE_19B | yunconglong | "2024-01-21T02:07:58Z" | 1,346 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"DPO",
"RL-TUNED",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-21T01:59:18Z" | ---
license: other
tags:
- moe
- DPO
- RL-TUNED
---
* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) with dataset jondurbin/truthy-dpo-v0.1
```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
```
|
PetroGPT/Severus-7B-DPO | PetroGPT | "2024-01-21T06:02:55Z" | 1,346 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-21T05:57:43Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
adamo1139/Yi-34B-200K-rawrr1-LORA-DPO-experimental-r3 | adamo1139 | "2024-05-27T21:29:22Z" | 1,346 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"qlora",
"unsloth",
"dataset:adamo1139/rawrr_v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-01-22T21:57:11Z" | ---
license: apache-2.0
datasets:
- adamo1139/rawrr_v1
tags:
- dpo
- qlora
- unsloth
---
Another QLoRA DPO training of Yi-34B-200K.
This time with sequence length 500, lora_r 16, and lora_alpha 32.
I was able to squeeze that in using Unsloth; the script I used is in this repo.
It definitely has a much stronger effect than my previous run, which used lora_r 4, lora_alpha 8, and sequence length 200, but I am not sure whether I overcooked it.
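For the shape of it, a heavily simplified sketch of the Unsloth side of such a run (the hyperparameters mirror the ones mentioned above; the target modules are an assumption, and the rawrr_v1 preparation plus the DPO step live in the script shipped with this repo):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="01-ai/Yi-34B-200K",
    max_seq_length=500,
    load_in_4bit=True,  # QLoRA
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)
# The DPO training itself (TRL DPOTrainer on adamo1139/rawrr_v1) follows; see the script in this repo.
```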
Will try to train this on AEZAKMI v2 now.
Credits to mlabonne (I used pieces of his Mistral fine-tuning script for dataset preparation), and to Daniel Han and Michael Han (the Unsloth AI team).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" alt="made with Unsloth" width="400" height="64"/>](https://github.com/unslothai/unsloth) |
ggml-org/gemma-1.1-2b-it-Q8_0-GGUF | ggml-org | "2024-04-05T08:40:45Z" | 1,346 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"region:us"
] | null | "2024-04-03T11:06:51Z" | ---
tags:
- llama-cpp
- gguf-my-repo
---
# reach-vb/gemma-1.1-2b-it-Q8_0-GGUF
This model was converted to GGUF format from [`gg-hf/gemma-1.1-2b-it`](https://huggingface.co/gg-hf/gemma-1.1-2b-it) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/gg-hf/gemma-1.1-2b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo reach-vb/gemma-1.1-2b-it-Q8_0-GGUF --model gemma-1.1-2b-it.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo reach-vb/gemma-1.1-2b-it-Q8_0-GGUF --model gemma-1.1-2b-it.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-1.1-2b-it.Q8_0.gguf -n 128
```
|
LeroyDyer/Mixtral_AI_CyberTron | LeroyDyer | "2024-04-12T08:16:08Z" | 1,346 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_M4",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_M3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-09T20:19:29Z" | ---
base_model:
- LeroyDyer/Mixtral_AI_CyberTron_M4
- LeroyDyer/Mixtral_AI_CyberTron_M3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* Mixtral_AI_MegaTron
* [LeroyDyer/Mixtral_AI_CyberTron_M4](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_M4)
* [LeroyDyer/Mixtral_AI_CyberTron_M3](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_M3)
* Mixtral_AI_AlphaTron
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Mixtral_AI_AlphaTron
parameters:
weight: 0.512
- model: LeroyDyer/Mixtral_AI_CyberTron_M4
parameters:
weight: 0.256
- model: Mixtral_AI_MegaTron
parameters:
weight: 0.512
- model: LeroyDyer/Mixtral_AI_CyberTron_M3
parameters:
weight: 0.256
merge_method: linear
dtype: float16
```
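To actually run a merge from a configuration like this, the YAML is typically saved to a file and passed to mergekit's command-line entry point. A representative invocation is shown below (paths are placeholders, and note that `Mixtral_AI_AlphaTron` and `Mixtral_AI_MegaTron` in the config refer to local model directories rather than Hub repositories):
```bash
pip install mergekit
mergekit-yaml merge-config.yaml ./merged-model --cuda
```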
|
Felladrin/gguf-Qwen2-1.5B-Instruct | Felladrin | "2024-06-07T09:25:23Z" | 1,346 | 0 | null | [
"gguf",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2024-06-07T08:45:41Z" | ---
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
---
GGUF version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct).
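These quantized files can be run locally with [llama.cpp](https://github.com/ggerganov/llama.cpp). A minimal example is shown below — the `.gguf` filename is a placeholder, so substitute the filename of whichever quantization you download from this repository:
```bash
# Run a short completion with a GGUF file downloaded from this repo (filename is illustrative).
llama-cli -m qwen2-1_5b-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" -n 128
```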
|
John6666/mala-anime-mix-nsfw-pony-xl-v5-sdxl-spo | John6666 | "2024-06-29T00:56:26Z" | 1,346 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-06-29T00:51:31Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
Original model is [here](https://civitai.com/models/442163/mala-anime-mix-nsfw-ponyxl?modelVersionId=604755).
|
timm/tiny_vit_21m_224.dist_in22k | timm | "2023-09-01T18:12:50Z" | 1,345 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-22k",
"arxiv:2207.10666",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-09-01T16:04:36Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-22k
---
# Model card for tiny_vit_21m_224.dist_in22k
A TinyViT image classification model. Pretrained on ImageNet-22k with distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 33.2
- GMACs: 4.1
- Activations (M): 16.0
- Image size: 224 x 224
- **Papers:**
- TinyViT: Fast Pretraining Distillation for Small Vision Transformers: https://arxiv.org/abs/2207.10666
- **Original:** https://github.com/microsoft/Cream/tree/main/TinyViT
- **Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('tiny_vit_21m_224.dist_in22k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tiny_vit_21m_224.dist_in22k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 576, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'tiny_vit_21m_224.dist_in22k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 576, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@InProceedings{tiny_vit,
title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers},
author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu},
booktitle={European conference on computer vision (ECCV)},
year={2022}
}
```
|
L-R/LLmRa-2.7B | L-R | "2023-11-28T16:03:23Z" | 1,345 | 1 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"AI",
"ConversationalAI",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-10T13:42:54Z" | ---
license: other
language:
- en
pipeline_tag: conversational
inference: false
tags:
- AI
- ConversationalAI
---
<h1 style="text-align: center">LLmRa-2.7B</h1>
<h2 style="text-align: center">A conversational Open Pre-trained Transformer Language Model fine-tune.</h2>
**LLmRa 2.7B** is a proof-of-concept fine-tune of [facebook/opt-2.7b](https://huggingface.co/facebook/opt-2.7b) optimized for dialogue.
**Disclaimer:** NSFW data was included in the fine-tuning of this model. Although SFW inputs will usually result in SFW outputs, you are advised to **chat at your own risk. This model is not suitable for use by minors.**
**Warning:** This model is **NOT** suitable for use by minors. **It will output X-rated content under certain circumstances.**
**This model is fine-tuned on a small test dataset; version 2 or a higher-parameter model will be trained on the full dataset.**
---
## Usage Format
To effectively utilize the model, follow this structured format for engaging text-based conversations:
**1. Initialization**
Here is how you can define the personality of the language model:
```
<|system|>[Persona]
```
- **Persona**: You can define a specific persona or context for the AI, but it's optional. It can be a character, a role, or just a style of interaction.
**2. AI Introduction**
```
<|user|>[User input]<|model|>
```
- Users can start the conversation by entering their message within `<|user|>` and closing with `<|model|>`.
---
### Example Usage:
Here's an example of how to start a conversation with the AI:
```
<|system|>I'm here to provide information and assistance on a wide range of topics.
<|model|>Hello! Welcome to our AI-powered assistant. How can I assist you today?
<|user|>Tell me about the history of artificial intelligence.
<|model|>
```
Continue the conversation as needed. This structured format helps maintain a smooth and engaging interaction with the AI.
You are not required to include `User`; you can change it to your preferred name or leave it blank. You may also add the AI name, for example:
```
<|user|>YourNameHere: Hello.<|model|>CharacterName:
```
You can also use this instruct prompt example:
```
<|system|>What is one plus one?<|model|>
```
## Loading The Model
To use the model and interact with it, use the Python code below:
```Python
from transformers import (AutoModelForCausalLM,
AutoTokenizer,
pipeline,
)
model = AutoModelForCausalLM.from_pretrained('L-R/LLmRa-2.7B')
tokenizer = AutoTokenizer.from_pretrained('L-R/LLmRa-2.7B')
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=100)
input_question = 'QUESTION HERE'
question_formatted = f'<|system|>{input_question}<|model|>'
result = pipe(question_formatted)
print(f"[model]: {result[0]['generated_text'][len(question_formatted):]}")
```
Or the more complex one:
```Python
import os
import random
import sys
import time
import json
import torch
from transformers import (AutoTokenizer,
AutoModelForCausalLM,
BitsAndBytesConfig,
set_seed)
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
local_tokenizer = bool(os.getenv('TOKENIZERS_PARALLELISM', 'false'))
class Chatbot:
def __init__(self, config):
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.tokenizer = None
self.config = config
self.persona = None
self.model = None
self.history = []
self.load_model()
def create_persona(self, persona_data):
required_keys = ['name', 'description', 'greeting']
if not all(key in persona_data for key in required_keys):
raise ValueError(
"Missing required keys in persona_data. Please provide 'name', 'description', and 'greeting'.")
new_persona_id = str(max(int(key) for key in self.config["personas"].keys()) + 1)
self.config["personas"][new_persona_id] = persona_data
return new_persona_id
def load_model(self):
model_path = self.config["model_path"]
tokenizer_path = self.config["tokenizer_path"]
quantization_config = BitsAndBytesConfig(
load_in_4bit= self.config['load_model_4bit'],
bnb_4bit_quant_type='nf4' if self.config['load_model_4bit'] else None,
bnb_4bit_compute_dtype=torch.float16 if self.config['load_model_4bit'] else None,
bnb_4bit_use_double_quant=True if self.config['load_model_4bit'] else None,
load_in_8bit=self.config['load_model_8bit'],
bnb_8bit_quant_type='nf4' if self.config['load_model_8bit'] else None,
bnb_8bit_compute_dtype=torch.float16 if self.config['load_model_8bit'] else None,
bnb_8bit_use_double_quant=True if self.config['load_model_8bit'] else None,
)
if not model_path or not tokenizer_path:
raise ValueError('model_name or tokenizer_path name not found! Define one.')
if self.config['load_model_4bit'] and self.config['load_model_8bit']:
raise ValueError("You can't load the model in 8 bits and 4 bits at the same time!")
if not self.config['user_name']:
print('You have not selected a name! No name will be send to the model.')
print(f"\nLoading model: {model_path}")
if torch.cuda.is_available():
self.model = AutoModelForCausalLM.from_pretrained(
model_path,
use_auth_token=self.config['model_token'],
quantization_config=quantization_config,)
if torch.cuda.device_count() > 1:
self.model = torch.nn.DataParallel(self.model)
model_running_on = f'{torch.cuda.device_count()} GPUs'
else:
model_running_on = '1 GPU'
else:
self.model = AutoModelForCausalLM.from_pretrained(
model_path,
quantization_config=quantization_config,
use_auth_token=self.config['model_token']).to(
self.device
)
model_running_on = 'CPU'
print(f'Model is running on: {model_running_on}')
self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_path, use_auth_token=self.config['model_token'])
print(self.tokenizer)
def load_persona(self, persona_id):
personas = self.config["personas"]
if persona_id in personas:
self.persona = personas[persona_id]
else:
raise ValueError("Invalid persona ID")
def formatting_question(self, user_input, history):
config_user = self.config['use_names']['user']
config_model = self.config['use_names']['model']
config_question = self.config['use_question_template']
if config_question:
formatted_answer = (
f'<|system|>{user_input}<|model|>'
)
else:
m_ = self.persona["description"]
g_ = self.persona["greeting"]
n_ = self.persona["name"]
un_ = self.config["user_name"]
if config_user and config_model:
formatted_answer = (
f'<|system|>{m_}<|model|>{n_}: {g_}{history}<|user|>{un_}: {user_input}<|model|>{n_}:'
)
elif config_user:
formatted_answer = (
f'<|system|>{m_}<|model|>{g_}{history}<|user|>{un_}: {user_input}<|model|>'
)
elif config_model:
formatted_answer = (
f'<|system|>{m_}<|model|>{n_}: {g_}{history}<|user|>{user_input}<|model|>{n_}:'
)
else:
formatted_answer = (
f'<|system|>{m_}<|model|>{g_}{history}<|user|>{user_input}<|model|>'
)
return formatted_answer
def history_formatting(self, last_input, last_output):
config_user = self.config['use_names']['user']
config_model = self.config['use_names']['model']
n_ = self.persona["name"]
un_ = self.config["user_name"]
if config_user and config_model:
formatted_answer = (
f'<|user|>{un_}: {last_input}<|model|>{n_}: {last_output}'
)
elif config_user:
formatted_answer = (
f'<|user|>{un_}: {last_input}<|model|>{last_output}'
)
elif config_model:
formatted_answer = (
f'<|user|>{last_input}<|model|>{n_}: {last_output}'
)
else:
formatted_answer = (
f'<|user|>{last_input}<|model|>{last_output}'
)
return formatted_answer
def reply(self, user_input):
config_question = self.config['use_question_template']
set_seed(random.randint(1, 1000))
user_input = " ".join(user_input.split())
if len(self.history) > self.config["history_length"]:
model_history = "\n".join([str(item) for item in self.history[-self.config["history_length"]:]])
else:
model_history = "\n".join([str(item) for item in self.history])
input_ai = self.formatting_question(user_input, model_history).strip()
tokenized_input_ai = self.tokenizer.encode(input_ai, return_tensors="pt")
output_ids = self.model.generate(
max_length=self.config["max_generation_length"] + len(tokenized_input_ai[0]),
no_repeat_ngram_size=self.config["no_repeat_ngram_size"],
repetition_penalty=self.config["repetition_penalty"],
length_penalty=self.config["length_penalty"],
input_ids=tokenized_input_ai.to(self.device),
pad_token_id=self.tokenizer.eos_token_id,
temperature=self.config["temperature"],
top_k=self.config["top_k"],
top_p=self.config["top_p"],
early_stopping=True,
use_cache=True,
do_sample=True,
)
ai_reply = self.tokenizer.decode(
output_ids[0],
skip_special_tokens=False)[len(input_ai)+4:]
if not config_question:
self.history.append(self.history_formatting(user_input, ai_reply))
return ai_reply.strip()
def reset_conversation(self):
self.history = []
class UserInterface:
def __init__(self, chatbot):
self.chatbot = chatbot
def run(self):
persona_id = self.chatbot.config["default_persona"]
self.chatbot.load_persona(persona_id)
print("\nChosen Persona:", self.chatbot.persona["name"])
print("Your Chosen Name:", self.chatbot.config["user_name"])
print(f'\n{self.chatbot.persona["name"]}: {self.chatbot.persona["greeting"]}')
self.chatbot.history.append(f'{self.chatbot.persona["name"]}: {self.chatbot.persona["greeting"]}')
while True:
user_input = input(f"\n>> {self.chatbot.config['user_name']}: ")
if user_input.lower() == "reset_app" or user_input == "reset_app":
self.chatbot.reset_conversation()
print("\nConversation history has been reset.\n")
self.chatbot.history.append(f'{self.chatbot.persona["name"]}: {self.chatbot.persona["greeting"]}')
print(f'{self.chatbot.persona["name"]}: {self.chatbot.persona["greeting"]}')
continue
if user_input.lower().startswith("create_persona"):
# Example of use: create_persona
# {"name": "CustomPersona",
# "description": "This is a custom persona created by the user.",
# "greeting": "Hello! I am CustomPersona, nice to meet you!"}
try:
persona_data = json.loads(' '.join(user_input.split()[1:]))
new_persona_id = self.chatbot.create_persona(persona_data)
print(f"Persona created with ID: {new_persona_id}")
except json.JSONDecodeError:
print("Invalid JSON input. Please provide a valid JSON string containing 'name', 'description', and 'greeting'.")
except ValueError as e:
print(e)
# Add a command to change the persona
if user_input.lower().startswith("change_persona"):
try:
new_persona_id = user_input.split()[1]
self.chatbot.load_persona(new_persona_id)
self.chatbot.reset_conversation()
print("\nPersona changed to:", self.chatbot.persona["name"])
print(f'\n{self.chatbot.persona["name"]}: {self.chatbot.persona["greeting"]}')
self.chatbot.history.append(f'{self.chatbot.persona["name"]}: {self.chatbot.persona["greeting"]}')
continue
except (IndexError, ValueError):
print("Invalid command or persona ID. Please use 'change_persona [ID]'.")
continue
if user_input.lower() == "exit_app" or user_input == "exit_app":
print("Goodbye!")
break
reply = self.chatbot.reply(user_input)
def typewriter_effect(sentence, type_delay):
for char in sentence:
sys.stdout.write(char)
sys.stdout.flush()
time.sleep(type_delay)
reply_length = len(reply)
type_delay_ranges = {
(100, 200): 0.03,
(200, 300): 0.02,
(300, 400): 0.01,
(400, 500): 0.005
}
default_type_delay = 0.04
for length_range, delay in type_delay_ranges.items():
if length_range[0] < reply_length <= length_range[1]:
type_delay = delay
break
else:
type_delay = default_type_delay
if self.chatbot.config['use_typing_effect']:
typewriter_effect(f'{self.chatbot.persona["name"]}: {reply}', type_delay)
else:
print(f'{self.chatbot.persona["name"]}: {reply}')
def main():
config = {
"user_name": "Jack", # The user's name, which is set to "Jack" in this case.
"model_path": "L-R/LLmRa-2.7B", # Path to the model used for generating responses.
"tokenizer_path": "L-R/LLmRa-2.7B", # Path to the tokenizer associated with the model.
"model_token": None, # If you want to load the model using your huggingface token. (Not required, but included)
"load_model_4bit": True, # Whether to load the model with 4-bit precision.
"load_model_8bit": False, # Whether to load the model with 8-bit precision.
"use_typing_effect": True, # Whether to simulate a typing effect when displaying responses.
"use_names": {
"model": False, # Whether the model's name should be used in question formatting.
"user": False, # Whether the user's name should be used in question formatting.
},
"use_question_template": False, # Whether to use predefined question templates in conversations.
"personas": {
# A dictionary of personas with their descriptions and greetings for use in conversations.
"1": {
"name": "LLmRa",
"description": "Description of the LLmRa persona. It provides background and characteristics of the persona.",
"greeting": "The greeting message when the LLmRa persona is active in a conversation."
},
"2": {
"name": "Hikari",
"description": "Description of the Hikari persona. It provides background and characteristics of the persona.",
"greeting": "The greeting message when the Hikari persona is active in a conversation."
}
},
"max_generation_length": 450, # The maximum length for generated responses.
"default_persona": "1", # The default persona to use when starting a conversation.
"history_length": 6, # The maximum number of previous messages to consider in the conversation history.
"top_k": 40, # Top-k sampling parameter for text generation.
"top_p": .55, # Top-p sampling parameter for text generation.
"temperature": .55, # Temperature parameter for controlling the randomness of generated text.
"length_penalty": 0.65, # Penalty factor for generating longer or shorter responses.
"no_repeat_ngram_size": 4, # Parameter to avoid repeating n-grams in generated text.
"repetition_penalty": 1.25, # Penalty factor for avoiding repeated phrases in generated text.
}
# Initialize chatbot and user interface
chatbot = Chatbot(config)
ui = UserInterface(chatbot)
# Run the user interface
ui.run()
if __name__ == "__main__":
main()
```
## Known issues
The model does not always follow instructions.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_L-R__LLmRa-2.7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 32.16 |
| ARC (25-shot) | 37.03 |
| HellaSwag (10-shot) | 60.65 |
| MMLU (5-shot) | 25.58 |
| TruthfulQA (0-shot) | 35.23 |
| Winogrande (5-shot) | 61.56 |
| GSM8K (5-shot) | 0.3 |
| DROP (3-shot) | 4.76 |
|
Jaewoo1/KoT-Platypus2_foundation | Jaewoo1 | "2023-10-16T07:12:02Z" | 1,345 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-16T06:49:03Z" | Entry not found |
jhflow/komt-mistral7b-kor-orca-lora | jhflow | "2023-10-28T07:47:30Z" | 1,345 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-28T07:32:56Z" | This is a test version of the model. This repository may be withdrawn without notice at any time.
base_model : https://huggingface.co/davidkim205/komt-mistral-7b-v1
dataset : https://huggingface.co/datasets/kyujinpy/OpenOrca-KO |
mncai/yi-34B-v2 | mncai | "2023-12-15T10:41:12Z" | 1,345 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-06T04:08:53Z" | ---
license: other
license_name: yi-license
license_link: LICENSE
---
# Model Card for yi-34b-v2
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. And as for generative AI, we are developing products like Code Assistant, TOD Chatbot, LLMOps, and are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
Based on yi-34b, instruction tuned.
### How to Use
Here are some examples of how to use our model.
```python
import torch
import transformers
from transformers import AutoTokenizer

hf_model = 'mncai/yi-34B-v2'
# Korean prompt: "There are two spheres with diameters 1 and 2; how many times do their volumes differ? Please explain."
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 구의 부피는 몇배 차이가 나지? 설명도 같이 해줘.\n<|assistant|>\n"

# Load the tokenizer and build a text-generation pipeline for the model.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected] |
mncai/mistral-7b-dpo-v5 | mncai | "2023-12-14T04:26:14Z" | 1,345 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-14T04:09:20Z" | ---
license: apache-2.0
language:
- en
---
# Model Card for mncai/mistral-7b-dpo-v5
### Introduction of MindsAndCompany
https://mnc.ai/
We create various AI models and develop solutions that can be applied to businesses. And as for generative AI, we are developing products like Code Assistant, TOD Chatbot, LLMOps, and are in the process of developing Enterprise AGI (Artificial General Intelligence).
### Model Summary
Based on Mistral, instruction tuned and DPO trained.
### How to Use
Here are some examples of how to use our model.
```python
import torch
import transformers
from transformers import AutoTokenizer

hf_model = 'mncai/mistral-7b-dpo-v5'
# Korean prompt: "There are two spheres with diameters 1 and 2; how many times larger is each sphere's volume? Please explain."
message = "<|user|>\n두 개의 구가 있는데 각각 지름이 1, 2일때 각 구의 부피는 몇배야? 설명도 같이 해줘.\n<|assistant|>\n"

# Load the tokenizer and build a text-generation pipeline for the model.
tokenizer = AutoTokenizer.from_pretrained(hf_model)
pipeline = transformers.pipeline(
    "text-generation",
    model=hf_model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
sequences = pipeline(
    message,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=2048,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
### Contact
If you have any questions, please raise an issue or contact us at [email protected] |
chargoddard/SmolLlamix-8x101M-take2 | chargoddard | "2024-01-05T01:25:49Z" | 1,345 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T-Sample",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-16T22:03:20Z" | ---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
language:
- en
---
This is another training run of [SmolLlamix-8x101M](https://huggingface.co/chargoddard/SmolLlamix-8x101M) with slightly different hyperparameters. Just testing to see how it holds up against the first run. |
liuda1/dm7b_sft_gpt88w_merge | liuda1 | "2023-12-26T03:31:44Z" | 1,345 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-25T08:03:28Z" | ---
license: apache-2.0
---
The model was fine-tuned with an English chat dataset added, and further reinforcement training was applied on specific datasets. The trained model has a certain level of chat ability, which was found to be enhanced during self-testing. We will continue to train the model in the future to improve its Chinese chat ability. |
maximuslee07/llama-2-13b-rockwellautomation | maximuslee07 | "2024-01-23T02:55:10Z" | 1,345 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:maximuslee07/raqna10k",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T04:29:52Z" | ---
license: llama2
datasets:
- maximuslee07/raqna10k
language:
- en
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
The model was trained on Rockwell Automation's Technical Support engage dataset, fine-tuned with LoRA/QLoRA on a server equipped with two RTX 4090 GPUs.
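As a rough illustration of the LoRA/QLoRA setup named above — none of the hyperparameters below are published in this card, and the base checkpoint is assumed to be `meta-llama/Llama-2-13b-hf` — a typical 4-bit PEFT configuration looks like this:

```python
# Illustrative QLoRA setup only; rank, dropout and target modules are placeholder values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-13b-hf"  # assumed base model, not stated in this card

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```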
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
beberik/Lonepino-11B | beberik | "2024-03-04T16:17:29Z" | 1,345 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-08T23:46:43Z" | ---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Lonepino-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.26
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.76
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Lonepino-11B
name: Open LLM Leaderboard
---
## Description
This repo contains bf16 files of Lonepino-11B. Just a normal model.
## Model used
- [Intel/neural-chat-7b-v3-3-Slerp](https://huggingface.co/Intel/neural-chat-7b-v3-3-Slerp)
- [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2)
- [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
- [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
## The secret sauce
neural-maid-11B:
```
slices:
- sources:
- model: Intel/neural-chat-7b-v3-3-Slerp
layer_range: [0, 24]
- sources:
- model: NeverSleep/Noromaid-7b-v0.2
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
loyal-PiVoT-11B:
```
slices:
- sources:
- model: chargoddard/loyal-piano-m7-cdpo
layer_range: [0, 24]
- sources:
- model: maywell/PiVoT-0.1-Starling-LM-RP
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Lonepino-11B:
```
slices:
- sources:
- model: "./neural-maid-11B"
layer_range: [0, 48]
- model: "./loyal-PiVoT-11B"
layer_range: [0, 48]
merge_method: slerp
base_model: "./neural-maid-11B"
parameters:
t:
- value: 0.4
dtype: bfloat16
```
## Prompt template
Alpaca. Or chatml. Or any you like.
=w=
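For reference, one common way the Alpaca layout is written (exact wording varies between frontends, so treat this as an illustration rather than a requirement of the model):
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{your prompt here}

### Response:
```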
I use [mergekit](https://github.com/cg123/mergekit) for all the merging described here.
Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B Mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Lonepino-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.10|
|AI2 Reasoning Challenge (25-Shot)|68.26|
|HellaSwag (10-Shot) |84.57|
|MMLU (5-Shot) |63.76|
|TruthfulQA (0-shot) |63.45|
|Winogrande (5-shot) |78.93|
|GSM8k (5-shot) |61.64|
|
Azazelle/Sina-Thor-7b-Merge | Azazelle | "2024-01-11T00:37:38Z" | 1,345 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T00:01:45Z" | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Sina-Thor-7b-Merge
<!-- Provide a quick summary of what the model is/does. -->
Part of a series of experimental DARE merges.
.yaml file for mergekit
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: rishiraj/smol-7b #75
parameters:
weight: 0.2
density: 0.41
- model: SanjiWatsuki/openchat-3.5-1210-starling-slerp #125
parameters:
weight: 0.33
density: 0.54
- model: Azazelle/Dumb-Maidlet #200
parameters:
weight: 0.53
density: 0.71
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
``` |
macadeliccc/polyglot-math-4x7b | macadeliccc | "2024-03-04T19:25:12Z" | 1,345 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"zh",
"ja",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-13T03:05:44Z" | ---
language:
- en
- zh
- ja
license: apache-2.0
model-index:
- name: polyglot-math-4x7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/polyglot-math-4x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/polyglot-math-4x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/polyglot-math-4x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.78
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/polyglot-math-4x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/polyglot-math-4x7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/polyglot-math-4x7b
name: Open LLM Leaderboard
---
# Polyglot-math-4x7b-24b

Polyglot-4x7b is a Mixture of Experts approach to a multilingual model.
The model is a merge of models that are capable of Chinese and Japanese output.
+ meta-math/MetaMath-Mistral-7B
+ oshizo/japanese-e5-mistral-7b_slerp
+ cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
+ s3nh/Mistral-7B-Evol-Instruct-Chinese
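The exact MoE merge configuration is not published in this card. For orientation, a `mergekit-moe` configuration combining these four experts would look roughly like the sketch below — the base model, `gate_mode` and gate prompts are illustrative assumptions, not the settings actually used:
```yaml
base_model: mistralai/Mistral-7B-v0.1   # assumed base
gate_mode: hidden                       # route tokens by hidden-state similarity to the prompts below
dtype: bfloat16
experts:
  - source_model: meta-math/MetaMath-Mistral-7B
    positive_prompts: ["math", "solve", "theorem"]
  - source_model: oshizo/japanese-e5-mistral-7b_slerp
    positive_prompts: ["日本語", "Japanese"]
  - source_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
    positive_prompts: ["reason", "explain", "chat"]
  - source_model: s3nh/Mistral-7B-Evol-Instruct-Chinese
    positive_prompts: ["中文", "Chinese"]
```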
I was able to fit the GSM8K evaluation for this model in 20GB of VRAM.
# Code Example
Inference [Colab](https://colab.research.google.com/drive/1tYSb63IKZDsiQ5BIJU8Oc92phxugAmB3?usp=sharing)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_response(prompt):
"""
Generate a response from the model based on the input prompt.
Args:
prompt (str): Prompt for the model.
Returns:
str: The generated response from the model.
"""
# Tokenize the input prompt
inputs = tokenizer(prompt, return_tensors="pt")
# Generate output tokens
outputs = model.generate(**inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)
# Decode the generated tokens to a string
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
return response
# Load the model and tokenizer
model_id = "macadeliccc/polyglot-math-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
# Math prompts in different languages
english_math_prompt = "Explain the proof of Fermat's Last Theorem and its implications in number theory."
chinese_math_prompt = "解释费马大定理的证明及其在数论中的意义。"
japanese_math_prompt = "フェルマーの最終定理の証明と数論におけるその意義について説明してください。"
# Generate and print responses for each math prompt
print("English Math Response:")
print(generate_response(english_math_prompt), "\n")
print("Chinese Math Response:")
print(generate_response(chinese_math_prompt), "\n")
print("Japanese Math Response:")
print(generate_response(japanese_math_prompt), "\n")
```
## Example Output
**The math model was trained in English, so it defaults to English, but it still understands the question and can translate the answer.**
English:
Explain the proof of Fermat's Last Theorem and its implications in number theory.
Fermat's Last Theorem (FLT) states that there are no non-trivial integer solutions to the equation $x^n + y^n = z^n$ for any integer $n \geq 3$. The proof of FLT was a long-standing problem in number theory, and it was finally proven in 1995 by Andrew Wiles.
The proof of FLT is quite complex and involves many different techniques and ideas from number theory, algebra, and analysis. The main idea behind the proof is to use elliptic curves and modular forms to construct a system of equations that can be used to show that there are no non-trivial integer solutions to the equation $x^n + y^n = z^n$ for any integer $n \geq 3$.
The implications of FLT in number theory are far-reaching. The proof of FLT relies on many different techniques and ideas from number theory, and it has led to the development of new techniques and ideas in the field.
This is a simple implementation of the quicksort algorithm in python. The function `quicksort` takes an array as input and returns a sorted array. The algorithm works by selecting a pivot element from the array and partitioning the other elements into two sub-arrays, according to whether they are less than or greater than the pivot. The process is then repeated recursively on the sub-arrays until the entire array is sorted.
Chinese Response:
解释费马大定理的证明及其在数论中的意义。
Fermat's Last Theorem (FLT) is a statement in number theory that states that there are no non-trivial integer solutions to the equation $x^n + y^n = z^n$ for any integer $n \geq 3$.
The proof of FLT was a long-standing open problem in mathematics. In 1993, Andrew Wiles, a British mathematician, published a proof of FLT using the techniques of elliptic curves and modular forms.
The proof of FLT is considered one of the most important achievements in mathematics in the 20th century. It is a testament to the power of mathematics and the dedication of mathematicians to solve difficult problems.
The proof of FLT has also had a significant impact on the field of number theory. It has led to the development of new techniques and theorems, and has inspired further research in the field.
In summary, the proof of FLT is a significant achievement in mathematics that has had a profound impact on the field of number theory. It is a testament to the power of mathematics and the dedication of mathematicians
Japanese Response:
フェルマーの最終定理の証明と数論におけるその意義について説明してください。
The Fermat's Last Theorem (FLT) is a statement in number theory that states that there are no non-trivial integer solutions to the equation $a^n + b^n = c^n$ for any positive integer $n$ greater than 2.
The proof of FLT was a long-standing open problem in mathematics. In 1993, Andrew Wiles, a British mathematician, published a proof of FLT using the techniques of elliptic curves and modular forms.
The proof of FLT is considered one of the most important achievements in mathematics in the 20th century. It is a prime example of the power of abstract algebra and number theory in solving difficult problems in mathematics.
The proof of FLT also has implications for other areas of mathematics, such as algebraic geometry and number theory. For example, the proof of FLT relies on the Taniyama-Shimura-Weil conjecture, which states that every elliptic curve is a modular form. This conjecture was proven by Wiles and his collaborators, and it has since been used to prove other theorems
# Evaluations
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.5504|± |0.0137|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__polyglot-math-4x7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.84|
|AI2 Reasoning Challenge (25-Shot)|63.74|
|HellaSwag (10-Shot) |84.85|
|MMLU (5-Shot) |63.57|
|TruthfulQA (0-shot) |53.78|
|Winogrande (5-shot) |78.45|
|GSM8k (5-shot) |56.63|
|
BarryFutureman/NeuralTurdusVariant1-7B | BarryFutureman | "2024-01-22T15:33:55Z" | 1,345 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T02:55:13Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- merge
---
# NeuralTurdusVariant1-7B
It is based on a merge of the following models using MergeKit
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus) |
Cartinoe5930/MoE-Merging | Cartinoe5930 | "2024-01-23T13:31:55Z" | 1,345 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T13:13:28Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aakku/mpnet-finetuned-v1 | aakku | "2024-05-16T11:05:59Z" | 1,345 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-05-16T11:05:41Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# aakku/mpnet-finetuned-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aakku/mpnet-finetuned-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aakku/mpnet-finetuned-v1')
model = AutoModel.from_pretrained('aakku/mpnet-finetuned-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aakku/mpnet-finetuned-v1)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Carxofa85/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset-IQ4_NL-GGUF | Carxofa85 | "2024-07-01T11:28:23Z" | 1,345 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Mohamed-Ahmed161/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-01T11:18:36Z" | ---
base_model: Mohamed-Ahmed161/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Carxofa85/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset-IQ4_NL-GGUF
This model was converted to GGUF format from [`Mohamed-Ahmed161/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset`](https://huggingface.co/Mohamed-Ahmed161/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mohamed-Ahmed161/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Carxofa85/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset-IQ4_NL-GGUF --hf-file llama-3-8b-instruct-bnb-16bit-medicalqnadataset-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Carxofa85/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset-IQ4_NL-GGUF --hf-file llama-3-8b-instruct-bnb-16bit-medicalqnadataset-iq4_nl-imat.gguf -c 2048
```
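Once the server is running, recent llama.cpp builds also expose an OpenAI-compatible HTTP API. A minimal sketch of querying it from Python, assuming the default port 8080 (the prompt and `model` label here are illustrative only):
```python
import requests

# Chat request against the local llama-server instance started above
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "local-gguf",  # label only; the server answers with whatever GGUF it loaded
        "messages": [{"role": "user", "content": "What are common symptoms of anemia?"}],
        "max_tokens": 256,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```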
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Carxofa85/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset-IQ4_NL-GGUF --hf-file llama-3-8b-instruct-bnb-16bit-medicalqnadataset-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Carxofa85/llama-3-8b-Instruct-bnb-16bit-MedicalQnADataset-IQ4_NL-GGUF --hf-file llama-3-8b-instruct-bnb-16bit-medicalqnadataset-iq4_nl-imat.gguf -c 2048
```
|
nvidia/segformer-b0-finetuned-cityscapes-1024-1024 | nvidia | "2022-08-08T13:43:30Z" | 1,344 | 5 | transformers | [
"transformers",
"pytorch",
"tf",
"segformer",
"vision",
"image-segmentation",
"dataset:cityscapes",
"arxiv:2105.15203",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | "2022-03-02T23:29:05Z" | ---
license: other
tags:
- vision
- image-segmentation
datasets:
- cityscapes
widget:
- src: https://cdn-media.huggingface.co/Inference-API/Sample-results-on-the-Cityscapes-dataset-The-above-images-show-how-our-method-can-handle.png
example_title: Road
---
# SegFormer (b0-sized) model fine-tuned on CityScapes
SegFormer model fine-tuned on CityScapes at resolution 1024x1024. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?other=segformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to perform semantic segmentation on an image from the COCO 2017 dataset:
```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-1024-1024")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-cityscapes-1024-1024")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
```
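The logits are produced at 1/4 of the input resolution. Continuing the snippet above, a minimal post-processing sketch (standard PyTorch, not taken from the original card) upsamples them to the image size and takes the per-pixel argmax:
```python
import torch.nn.functional as F

# Upsample to the (height, width) of the input image, then pick the most likely class per pixel
upsampled_logits = F.interpolate(logits, size=image.size[::-1], mode="bilinear", align_corners=False)
predicted_segmentation = upsampled_logits.argmax(dim=1)[0]  # tensor of Cityscapes label ids
```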
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/segformer.html#).
### License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
author = {Enze Xie and
Wenhai Wang and
Zhiding Yu and
Anima Anandkumar and
Jose M. Alvarez and
Ping Luo},
title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with
Transformers},
journal = {CoRR},
volume = {abs/2105.15203},
year = {2021},
url = {https://arxiv.org/abs/2105.15203},
eprinttype = {arXiv},
eprint = {2105.15203},
timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
EleutherAI/pythia-2.8b-v0 | EleutherAI | "2023-07-10T01:35:41Z" | 1,344 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"pythia_v0",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-11-20T03:56:10Z" | ---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.
ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
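The same pattern extends to the intermediate checkpoints, which makes it easy to study how behavior changes over training. A minimal sketch, shown with the small 70M model to keep memory use low (the `stepN` revision names follow the branch naming described above):
```python
import torch
from transformers import GPTNeoXForCausalLM, AutoTokenizer

prompt = "The capital of France is"
for revision in ["step1000", "step71000", "step143000"]:
    model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-70m-deduped", revision=revision)
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m-deduped", revision=revision)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Per-token cross-entropy of the prompt under this checkpoint
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(f"{revision}: per-token loss = {loss.item():.3f}")
```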
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> |
Korabbit/llama-2-ko-7b-bilingual | Korabbit | "2024-02-26T08:02:50Z" | 1,344 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"ko",
"dataset:Open-Orca/OpenOrca",
"dataset:kyujinpy/KOR-OpenOrca-Platypus",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-26T04:17:35Z" | ---
license: llama2
datasets:
- Open-Orca/OpenOrca
- kyujinpy/KOR-OpenOrca-Platypus
language:
- en
- ko
---
Base Model: beomi/llama-2-ko-7b |
42MARU/GenAI-llama2-ko-en-dpo-13b-v1 | 42MARU | "2023-11-18T16:52:38Z" | 1,344 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-18T16:47:40Z" | Entry not found |
BM-K/mistral-7b-it-v1.7.1 | BM-K | "2023-11-20T23:40:29Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-20T23:34:44Z" | Entry not found |
heegyu/zephyr-7b-beta-KOR-OpenOrca-Platypus-1e-5 | heegyu | "2023-11-27T11:04:14Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-25T12:13:11Z" | Entry not found |
jondurbin/bagel-7b-v0.1 | jondurbin | "2023-12-13T16:37:18Z" | 1,344 | 19 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:cais/mmlu",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:spider",
"dataset:squad_v2",
"dataset:migtissera/Synthia-v1.3",
"dataset:datasets/winogrande",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-13T12:14:28Z" | ---
license: apache-2.0
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
- spider
- squad_v2
- migtissera/Synthia-v1.3
- datasets/winogrande
---
# A bagel, with everything (except DPO)

## Overview
This is the pre-DPO version of the mistral-7b model fine-tuned with https://github.com/jondurbin/bagel
You probably want the higher performing model that underwent DPO: https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1
The only benefit to this model is that it is less "truthful", for roleplaying and other types of scenarios that may benefit more from the SFT-only tune.
## Data selection.
The first step in the process is creating a dataset.
In this case, we're actually creating a composite dataset, consisting of both supervised fine-tuning data (SFT) and direct preference optimization (DPO) data.
All instruction data, that is, data that is not plain text (like project Gutenberg and items from Cinematika) or DPO, is converted into ShareGPT format so it's easier to work with.
See the corresponding code in `bagel/data_sources/*.py` in the repo linked above for full implementation for each data source.
Deduplication is done by creating a uuid v5 of the instruction/text, then only adding items not previously seen (where datasets are loaded in order of the confidence score I assign them).
This means that if an instruction is in data source "Foo" with confidence 4 as well as in data source "Bar" with confidence score 2, only the entry from "Foo" will be taken.
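A minimal sketch of that deduplication idea is below; the namespace, field names, and confidence handling are illustrative, and the real logic lives in `bagel/data_sources/*.py`:
```python
import uuid

def dedupe(items):
    """Keep the first occurrence of each instruction, visiting higher-confidence sources first."""
    seen, kept = set(), []
    for item in sorted(items, key=lambda x: -x["confidence"]):
        key = uuid.uuid5(uuid.NAMESPACE_DNS, item["text"])
        if key not in seen:
            seen.add(key)
            kept.append(item)
    return kept
```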
### SFT data sources
*Yes, you will see benchmark names in the list, but this only uses the train splits, and a decontamination by cosine similarity is performed at the end as a sanity check*
- [ai2_arc](https://huggingface.co/datasets/ai2_arc)
- Abstraction and reasoning dataset, useful in measuring "intelligence" to a certain extent.
- [airoboros](https://huggingface.co/datasets/unalignment/spicy-3.1)
- Variety of categories of synthetic instructions generated by gpt-4.
- [apps](https://huggingface.co/datasets/codeparrot/apps)
- Python coding dataset with 10k problems.
- [belebele](https://huggingface.co/datasets/facebook/belebele)
- Multi-lingual reading comprehension dataset.
- [boolq](https://huggingface.co/datasets/boolq)
- Corpus of yes/no questions (which can be surprisingly difficult for AI to answer apparently?)
- [cinematika](https://huggingface.co/datasets/jondurbin/cinematika-v0.1) (instruction and plain text)
- RP-style data synthesized from movie scripts so the model isn't quite as boring as it otherwise would be.
- [drop](https://huggingface.co/datasets/drop)
- More reading comprehension.
- [gutenberg](https://www.gutenberg.org/) (plain text)
- Books/plain text, again to make the model less boring, only a handful of examples supported by [chapterize](https://github.com/JonathanReeve/chapterize)
- [lmsys_chat_1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) (only gpt-4 items, also used for DPO)
- Chats collected by the lmsys chat arena, containing a wide variety of chats with various models.
- [mathinstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- Composite dataset with a variety of math-related tasks and problem/question formats.
- [mmlu](https://huggingface.co/datasets/cais/mmlu)
- Massive Multitask Language Understanding - a wide variety of questions about various subject matters.
- [natural_instructions](https://huggingface.co/datasets/Muennighoff/natural-instructions)
- Millions of instructions from 1600+ task categories (sampled down substantially, stratified by task type)
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- Question answering dataset.
- [piqa](https://huggingface.co/datasets/piqa)
  - Physical interaction question answering.
- [python_alpaca](https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca)
- Python instruction response pairs, validated as functional.
- [rosetta_code](https://huggingface.co/datasets/cakiki/rosetta-code)
- Code problems and solutions in a variety of programming languages taken from rosettacode.org.
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Collection of ~500k gpt-4 verified chats from OpenOrca.
- [spider](https://huggingface.co/datasets/spider)
- SQL-targeted dataset.
- [squad_v2](https://huggingface.co/datasets/squad_v2)
- Contextual question answering (RAG).
- [synthia](https://huggingface.co/datasets/migtissera/Synthia-v1.3)
- GPT-4 generated data using advanced prompting from Migel Tissera.
- [winogrande](https://huggingface.co/datasets/winogrande)
- Fill in the blank style prompts.
Only the train splits were used (if a split was provided), and an additional pass of decontamination is performed using approximate nearest neighbor search (via faiss).
## Prompt formatting
In sticking with the theme of the bagel, I didn't want to use a single prompt format, so I used 4 - vicuna, llama-2, alpaca, and chat-ml (sorta).
I also didn't want to randomly select a single prompt format for each item (hoping each instruction would generalize more when used in a variety of prompt formats), so each instruction is actually converted into every prompt format.
This means each epoch of our fine-tune is really basically 4 epochs. So, for the fine-tunes, I would recommend only doing 1 epoch (or 0.75 epochs). I am testing with a single epoch using a relatively low learning rate.
### Alpaca (sort of)
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{system prompt, if provided}
{instruction}
### Response:
```
The main difference here is that because of the dataset formatting and variety of data sources, it would have been much too tedious to add an `### Input:` block, so the inputs are just in the instruction section.
### Vicuna
```
{system prompt, if provided, randomly defaulting to "A chat between a user and an unbiased, uncensored assistant."}
USER: {instruction}
ASSISTANT:
```
### ChatML (sort of)
I don't really understand the point of having special tokens for `<|im_start|>` and `<|im_end|>`, because in practice they just act as BOS and EOS tokens (but, please correct me if I'm wrong).
So, instead of:
```text
{bos}<|im_start|>{role}
{text}
<|im_end|>{eos}
```
I just changed it to:
```text
{bos}{role}
{text}
{eos}
```
In practice, this would mean tokenization code like such:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mistralai/mistral-7b-v0.1')
input_str = f"""system
You are a goat.
{tokenizer.eos_token}
{tokenizer.bos_token}user
Tell me how to fry an egg.
{tokenizer.eos_token}
{tokenizer.bos_token}assistant
"""
inputs = tokenizer(input_str, return_tensors="pt")
```
If you *really* want to use `<|im_start|>` and `<|im_end|>`, just update your `tokenizer_config.json` to use `<|im_start|>` instead of `<s>` and `<|im_end|>` instead of `</s>` and when tokenizing. And if you still don't like what I've done to this chat-ml-ish format, feel free to cry into your pillow or fork the code and do a new fine-tune.
### Llama-2 chat
```
[INST] <<SYS>>
{system}
<</SYS>>
{instruction} [/INST]
```
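Taken together, a minimal sketch of expanding a single instruction into all four formats; the function and exact whitespace are illustrative, not taken from the bagel repo:
```python
def expand_to_all_formats(system, instruction, response):
    """Render one training example in the alpaca, vicuna, chatml-ish, and llama-2 chat formats."""
    alpaca = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{system}\n{instruction}\n\n### Response:\n{response}"
    )
    vicuna = f"{system}\nUSER: {instruction}\nASSISTANT: {response}"
    chatml_ish = f"<s>system\n{system}\n</s><s>user\n{instruction}\n</s><s>assistant\n{response}</s>"
    llama2 = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{instruction} [/INST] {response}"
    return {"alpaca": alpaca, "vicuna": vicuna, "chatml": chatml_ish, "llama2": llama2}
```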
### Fine-tune
*Note: I actually used my fork of [qlora](https://github.com/jondurbin/qlora)'s `train.py` for this, but I'm porting it to a minified version here, not tested yet!*
*More notes: I stopped the fine-tune around 50% because of budget constraints - it's a lot of data...*
```bash
export BASE_DIR=/workspace
export WANDB_API_KEY=[redacted]
export WANDB_PROJECT=bagel-7b-v0.1
# Run the pretraining.
accelerate launch bagel/tune/sft.py \
--model_name_or_path $BASE_DIR/mistral-7b \
--final_output_dir $BASE_DIR/$WANDB_PROJECT \
--output_dir $BASE_DIR/$WANDB_PROJECT-workdir \
--num_train_epochs 1 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 5 \
--data_seed 42 \
--evaluation_strategy steps \
--eval_dataset_size 0.0006 \
--eval_steps 200 \
--max_new_tokens 4096 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--full_finetune \
--bf16 \
--bits 16 \
--optim adamw_torch \
--lr_scheduler_type linear \
--dataset $BASE_DIR/bagel/bagel-input-output-v0.1.parquet \
--dataset_format input-output \
--model_max_len 4096 \
--per_device_train_batch_size 8 \
--learning_rate 3.5e-7 \
--warmup_ratio 0.005 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--weight_decay 0.001 \
--seed 42 \
--report_to wandb \
--gradient_checkpointing True \
--gradient_accumulation_steps 4 \
--skip_excess_length False \
--ddp_find_unused_parameters False \
--use_flash_attention_2 \
--deepspeed deepspeed.json
```
Deepspeed configuration:
```json
{
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"bf16": {
"enabled": true
},
"zero_optimization": {
"stage": 2,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"allgather_bucket_size": 5e8
}
}
``` |
maywell/PiVoT-MoE | maywell | "2023-12-16T23:33:27Z" | 1,344 | 8 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-16T21:08:20Z" | ---
license: cc-by-nc-4.0
---
# PiVot-MoE

## Model Description
PiVoT-MoE, is an advanced AI model specifically designed for roleplaying purposes. It has been trained using a combination of four 10.7B sized experts, each with their own specialized characteristic, all fine-tuned to bring a unique and diverse roleplaying experience.
The Mixture of Experts (MoE) technique is utilized in this model, allowing the experts to work together synergistically, resulting in a more cohesive and natural conversation flow. The MoE architecture allows for a higher level of flexibility and adaptability, enabling PiVoT-MoE to handle a wide variety of roleplaying scenarios and characters.
Based on the PiVoT-10.7B-Mistral-v0.2-RP model, PiVoT-MoE takes it a step further with the incorporation of the MoE technique. This means that not only does the model have an expansive knowledge base, but it also has the ability to mix and match its expertise to better suit the specific roleplaying scenario.
## Prompt Template - Alpaca (ChatML works)
```
{system}
### Instruction:
{instruction}
### Response:
{response}
``` |
KaeriJenti/kaori-34b-v3 | KaeriJenti | "2023-12-22T06:30:12Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-22T05:13:33Z" | ---
license: llama2
---
<h1>kaori-34b-v3 Model Card</h1>
This Model was Finetuned By Kaeri and Jenti.
<h3>Datasets Strategy</h3>
- Open-Platypus
- Dolphin
We trained the model with 100% Open-Platypus data and 5% Dolphin data and applied SFT strategy.
We did not use GSM8k samples when generating data.
We also guarded against data contamination by similarity-filtering out training examples that matched any of the tasks in the following list.
<pre>
filtering_tasks = [
'cot_gsm8k',
'cot_gsm8k_ii',
'drop:2.0.0',
'winogrande:1.1.0'
'task228_arc_answer_generation_easy',
'ai2_arc/ARC-Challenge:1.0.0',
'ai2_arc/ARC-Easy:1.0.0',
'task229_arc_answer_generation_hard',
'hellaswag:1.1.0',
'task1389_hellaswag_completion'
]
</pre>
<h3>Framework:</h3>
- https://github.com/hiyouga/LLaMA-Factory
<h3>Parameters:</h3>
- Finetune_Type : LoRA
- GPUs : A100x4(80GB)
- Epochs : 3
- Batchsize : 8
|
heegyu/1222-42dot-1.3B-Ko-CoT-Collection-2e-5 | heegyu | "2023-12-22T09:52:03Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-22T09:49:51Z" | Entry not found |
hyeogi/SOLAR-10.7B-dpo-v0.1 | hyeogi | "2024-01-01T02:44:39Z" | 1,344 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"SOLAR-10.7B",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-31T07:19:26Z" | ---
language:
- ko
pipeline_tag: text-generation
tags:
- SOLAR-10.7B
---
# SOLAR-10.7B
### Model Details
- Base Model: [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1)
### Datasets
- sampling and translate [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- sampling and translate [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
### Benchmark
- SOTA model as of Jan 1, 2024 (https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **hyeogi/SOLAR-10.7B-dpo-v0.1 (Ours)** | **56.29** | 47.95 | 59.49 | 51.29 | 60.97 | 61.75 |
| [jeonsworld/CarbonVillain-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-10.7B-v1) | 55.33 | 49.91 | 60.65 | 55.04 | 48.22 | 62.81 |
| [Megastudy/M-SOLAR-10.7B-v1.1-beta](https://huggingface.co/Megastudy/M-SOLAR-10.7B-v1.1-beta) | 55.25 | 51.71 | 60.86 | 54.24 | 47.12 | 62.34 |

|
abideen/NexoNimbus-7B | abideen | "2024-01-14T08:10:09Z" | 1,344 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"abideen/DareVox-7B",
"udkai/Garrulus",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T15:20:27Z" | ---
license: apache-2.0
tags:
- merge
- abideen/DareVox-7B
- udkai/Garrulus
language:
- en
---
# NexoNimbus-7B

NexoNimbus-7B is a merge of the following models:
* [abideen/DareVox-7B](https://huggingface.co/abideen/DareVox-7B)
* [udkai/Garrulus](https://huggingface.co/udkai/Garrulus)
🏆 Evaluation
NexoNimbus-7B is the 5th best-performing 7B LLM on the Open LLM Leaderboard:

| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |68.25|± | 1.36|
| | |acc_norm|70.81|± | 1.38|
|hellaswag | 0|acc |70.86|± | 0.45|
| | |acc_norm|87.86|± | 0.32|
|gsm8k | 0|acc |70.35|± | 1.25|
|winogrande | 0|acc |84.84|± | 1.00|
|mmlu | 0|acc |64.69|± | 1.00|
Average: 73.5%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |46.26|± | 1.74|
| | |mc2 |62.42|± | 1.54|
## 🧩 Configuration
```yaml
slices:
- sources:
- model: abideen/DareVox-7B
layer_range: [0, 32]
- model: udkai/Garrulus
layer_range: [0, 32]
merge_method: slerp
base_model: abideen/DareVox-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
Here's a [Colab notebook](https://colab.research.google.com/drive/1F9lzL1IeZRMgiSbY9UbgCR__RreIflJh?usp=sharing) to run NexoNimbus-7B in 4-bit precision on a free T4 GPU.
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "abideen/NexoNimbus-7B"
messages = [{"role": "user", "content": "Explain what is Machine learning."}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
"Machine learning is a subfield of artificial intelligence that focuses on developing algorithms and models that allow computers to learn and improve their performance over time, without being explicitly programmed. It involves the use of statistical techniques and data analysis to identify patterns and make predictions based on input data.
In machine learning, data is fed into a model, which then adjusts its internal parameters to minimize the difference between the predicted output and the actual output. This process is called training, and as the model is exposed to more data, it becomes better at making predictions or classifications.
Machine learning can be divided into several categories, including supervised learning, unsupervised learning, and reinforcement learning. Supervised learning involves using labeled data, where the desired output is known, and the model learns to map inputs to outputs. Unsupervised learning, on the other hand, does not have a predefined output, and the model learns to identify patterns or relationships within the data. Reinforcement learning involves learning through trial and error, with the model receiving feedback in the form of rewards or penalties based on its actions.
Some common applications of machine learning include image recognition, natural language processing, recommendation systems, fraud detection, and self-driving." |
KnutJaegersberg/internlm-20b-llama | KnutJaegersberg | "2024-03-04T16:27:23Z" | 1,344 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-15T08:10:53Z" | ---
license: other
license_name: internlm
license_link: LICENSE
pipeline_tag: text-generation
model-index:
- name: internlm-20b-llama
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.71
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.1
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=KnutJaegersberg/internlm-20b-llama
name: Open LLM Leaderboard
---
Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English)/申请表(中文). For other questions or collaborations, please contact [email protected].
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__internlm-20b-llama)
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.09|
|AI2 Reasoning Challenge (25-Shot)|61.35|
|HellaSwag (10-Shot) |82.08|
|MMLU (5-Shot) |61.59|
|TruthfulQA (0-shot) |57.71|
|Winogrande (5-shot) |76.72|
|GSM8k (5-shot) |51.10|
|
LoSboccacc/orthogonal-2x7B-base | LoSboccacc | "2024-03-04T12:32:52Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-16T18:56:03Z" | ---
model-index:
- name: orthogonal-2x7B-base
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LoSboccacc/orthogonal-2x7B-base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LoSboccacc/orthogonal-2x7B-base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LoSboccacc/orthogonal-2x7B-base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LoSboccacc/orthogonal-2x7B-base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LoSboccacc/orthogonal-2x7B-base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LoSboccacc/orthogonal-2x7B-base
name: Open LLM Leaderboard
---
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: SanjiWatsuki/Silicon-Maid-7B
    positive_prompts:
      - "roleplay"
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts:
      - "chat"
```
The model uses the ChatML prompt format.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_LoSboccacc__orthogonal-2x7B-base)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.13|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.54|
|MMLU (5-Shot) |62.49|
|TruthfulQA (0-shot) |66.00|
|Winogrande (5-shot) |77.03|
|GSM8k (5-shot) |50.80|
|
Karko/Proctora | Karko | "2024-02-05T03:43:33Z" | 1,344 | 4 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-16T22:13:00Z" | ---
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- moe
- merge
---

Proctora is a MoE model made of
- OpenPipe/mistral-ft-optimized-1227 as a base model
- SanjiWatsuki/Kunoichi-7B as a first expert dedicated to RP tasks.
- samir-fama/SamirGPT-v1 as a second expert for factual answers.
Being based on Mixtral architecture it has a natural context length of 32K, which is great.
On the Open LLM leaderboard it achieves a score of 71.88, which is interesting to some extent but does not really reflect the intended capacities of the model.
This model was originally produced as a result of experiments with mergekit. Then, among my collection of LLMs, Proctora was selected to be the "grader" in an AI-RPG evaluation suite that I am currently building. Indeed, it produced the intended grades according to given rubrics more often than other "higher performing" models on the leaderboard.
However, I also tested it in various RP scenarios using text-generation-webui (putting the character card in the system parameters and/or other world information), and I was quite impressed by the quality of the logic (relative to other popular RP models). For example, it took into account special-power limitations better than other models. Or it managed curse activations and weaknesses better than other models that are about twice the size. Also, when acting as the player (with the user being the game master), Proctora was not only able to play in character but also sometimes to make clever decisions to achieve its objectives.
Having the excellent SanjiWatsuki/Kunoichi-7B as an expert, the model is uncensored. Use with caution.
[Support Me Here!](https://ko-fi.com/karkomagor)
[My Blog](https://aitravelnotes.blogspot.com/) |
fierysurf/Ambari-7B-Instruct-v0.1-sharded | fierysurf | "2024-01-18T08:52:47Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"bilingual",
"kannada",
"english",
"en",
"kn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-18T08:00:11Z" | ---
license: mit
language:
- en
- kn
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- bilingual
- kannada
- english
---
(This repo contains the sharded version of the [original](https://huggingface.co/Cognitive-Lab/Ambari-7B-base-v0.1) Ambari-7B model)
# Ambari-7B-Base-v0.1 (sharded)
## Overview
Ambari-7B-Base-v0.1 is the first bilingual English/Kannada model in the Ambari series, developed and released by [Cognitivelab.in](https://www.cognitivelab.in/). Based on the Llama2 model by Meta, this 7B parameter model is the outcome of the pretraining stage, involving training on approximately 500 million new Kannada tokens.
## Usage
To use the Ambari-7B-Base-v0.1 model, you can follow the example code below:
```python
# Usage
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained('Cognitive-Lab/Ambari-7B-Base-v0.1')
tokenizer = LlamaTokenizer.from_pretrained('Cognitive-Lab/Ambari-7B-Base-v0.1')
prompt = "ಕನ್ನಡದ ಇತಿಹಾಸವನ್ನು ವಿವರವಾಗಿ ತಿಳಿಸಿ"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate
generate_ids = model.generate(inputs.input_ids, max_length=30)
decoded_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(decoded_output)
```
**Important:** The provided model serves as a foundation and is not designed for independent use. We strongly advise conducting finetuning tailored to your particular task(s) of interest before deploying it in a production environment. Feel free to customize the code according to your specific use case, ensuring that the model undergoes finetuning for optimal performance in your desired application. |
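As a starting point for such finetuning, here is a minimal LoRA sketch with PEFT; the hyperparameters and target modules are illustrative, not recommendations from the model authors:
```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained("Cognitive-Lab/Ambari-7B-Base-v0.1")
tokenizer = LlamaTokenizer.from_pretrained("Cognitive-Lab/Ambari-7B-Base-v0.1")

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections for a Llama-style model
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```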
tenyx/TenyxChat-8x7B-v1 | tenyx | "2024-01-19T07:11:05Z" | 1,344 | 12 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"tenyx-fine-tuning",
"dpo",
"tenyxchat",
"conversational",
"en",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"arxiv:2401.04088",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-18T16:36:43Z" | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- tenyx-fine-tuning
- dpo
- tenyxchat
datasets:
- HuggingFaceH4/ultrafeedback_binarized
---
# TenyxChat: Language Model Alignment using Tenyx Fine-tuning
Introducing TenyxChat-8x7B-v1, part of our TenyxChat series trained to function as useful assistants through preference tuning, using Tenyx's recently released advanced fine-tuning technology ([VentureBeat article](https://venturebeat.com/ai/tenyx-aims-to-fix-llms-catastrophic-forgetting-problem/)). Our model is trained using the [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) framework on the open-source AI feedback dataset [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).
We fine-tune [Mixtral-8x7B-Instruct-v0.1](https://arxiv.org/pdf/2401.04088.pdf) with our proprietary approach ([blog](https://www.tenyx.com/post/forgetting-and-toxicity-in-llms-a-deep-dive-on-fine-tuning-methods), [service](https://www.tenyx.com/fine-tuning)),
similar to that of our [7B model](https://huggingface.co/tenyx/TenyxChat-7B-v1), and show an increase in [MT-Bench](https://arxiv.org/abs/2306.05685) scores.
Our approach aims to mitigate forgetting in LLMs in a computationally efficient manner, thereby enabling continual fine-tuning capabilities without altering the pre-trained output distribution.
TenyxChat-8x7B-v1 was trained using eight A100s (80GB) for about eight hours, with a training setup obtained from HuggingFaceH4 ([GitHub](https://github.com/huggingface/alignment-handbook)).
# Model details
- Model type: Fine-tuned Mixture Of Expert 8x7B model for chat.
- License: Apache 2.0
- Base model: [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- Demo: [spaces/tenyx/TenyxChat-8x7B-v1](https://huggingface.co/spaces/tenyx/TenyxChat-8x7B-v1)
## Usage
Our model uses a simple chat template based on Mixtral-8x7B-Instruct-v0.1. Usage of the chat template with a Hugging Face generation example is shown below.
### Chat Template (Jinja)
```rust
{{ bos_token }}
{% for message in messages %}
{% if message['role'] == 'user' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'system' %}
{{ '[INST]' + message['content'] + '[/INST]' }}
{% elif message['role'] == 'assistant' %}
{{ message['content'] + eos_token }}
{% endif %}
{% endfor %}
```
### Hugging face Example
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="tenyx/TenyxChat-8x7B-v1", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
{"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate."},
{"role": "user", "content": "Hi. I would like to make a hotel booking."},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=512, do_sample=False)
```
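To inspect the generated reply, you can print the pipeline output (continuing the snippet above):
```python
print(outputs[0]["generated_text"])
```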
### Output
```
<s>[INST]You are a friendly chatbot who always responds in the style of a pirate.[/INST]
[INST]Hi. I would like to make a hotel booking.[/INST]
Ahoy there, me hearty! Ye wish to make a hotel booking, do ye? Well, let's set sail on this voyage of reservations and see what we can find!
What's the name of the port (hotel) and the dates of our journey (check-in and check-out)? I'll do me best to assist ye!
```
# Performance
At the time of release (Jan 2024), TenyxChat-8x7B-v1 was the highest-ranked model available for download and commercial use on the MT-Bench evaluation, surpassed only by GPT-4.
## MT-Bench
MT-Bench is a benchmark made up of 80 high-quality multi-turn questions. These questions fall into eight categories: Writing, Roleplay, Reasoning, Math, Coding, Extraction, STEM, and Humanities. The chat models are rated using GPT-4 on a scale of 1 to 10, with higher values corresponding to better responses.
| Model | First Turn | Second Turn | Average |
| --- | --- | --- | --- |
| GPT-4* | 8.95625 | 9.02500 | 8.990625 |
| TenyxChat-8x7B-v1 | 8.63750 | 8.16250 | 8.400000 |
| Mixtral (reproduced) | 8.49375 | 8.00000 | 8.246875 |
| GPT-3.5-turbo* | 8.07500 | 7.81250 | 7.943750 |
*values reported on [lmsys](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) ChatBot Arena

# Limitations
TenyxChat-8x7B-v1, like other language models, has its own set of limitations. We haven’t fine-tuned the model explicitly to align with **human** safety preferences. Therefore, it is capable of producing undesirable outputs, particularly when adversarially prompted. From our observation, the model still tends to struggle with tasks that involve reasoning and math questions. In some instances, it might generate verbose or extraneous content.
# License
TenyxChat-8x7B-v1, similar to Mixtral-8x7B-Instruct-v0.1, is distributed under the Apache License 2.0.
# Citation
If you use TenyxChat-8x7B-v1 for your research, cite us as
```
@misc{tenyxchat2024,
title={TenyxChat: Language Model Alignment using Tenyx Fine-tuning},
author={Tenyx},
year={2024},
}
``` |
cloudyu/Venus_DPO_50 | cloudyu | "2024-01-23T00:33:34Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-21T01:41:25Z" | ---
license: mit
tags:
- moe
---
* [This is a DPO-improved version of cloudyu/Mixtral_11Bx2_MoE_19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B)
* [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer)
|
LordNoah/Alpaca_refine_gpt2_e1_se0 | LordNoah | "2024-01-23T01:34:37Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-23T00:51:27Z" | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
refine-tuned gpt2 e1se0
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
v2ray/Llama-3-70B | v2ray | "2024-04-27T06:04:08Z" | 1,344 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-18T16:49:58Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
extra_gated_prompt: >-
### META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
"Agreement" means the terms and conditions for use, reproduction, distribution and modification of the
Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3
distributed by Meta at https://llama.meta.com/get-started/.
"Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into
this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or
regulations to provide legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Meta Llama 3" means the foundational large language models and software and algorithms, including
machine-learning model code, trained model weights, inference-enabling code, training-enabling code,
fine-tuning enabling code and other elements of the foregoing distributed by Meta at
https://llama.meta.com/llama-downloads.
"Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any
portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your
principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located
outside of the EEA or Switzerland).
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free
limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama
Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the
Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works
thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide
a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta
Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you
use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is
distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model
name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part
of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following
attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is
licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights
Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations
(including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama
Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to
improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users
of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700
million monthly active users in the preceding calendar month, you must request a license from Meta,
which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the
rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY
OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF
ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,
INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR
DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND
ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND
RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING
OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL,
INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED
OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama
Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other
or any of its affiliates, except as required for reasonable and customary use in describing and
redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to
use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will
comply with Meta’s brand guidelines (currently accessible at
https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use
of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with
respect to any derivative works and modifications of the Llama Materials that are made by you, as
between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or
results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under this Agreement shall
terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold
harmless Meta from and against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this
Agreement or access to the Llama Materials and will continue in full force and effect until terminated in
accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete
and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this
Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of
the State of California without regard to choice of law principles, and the UN Convention on Contracts
for the International Sale of Goods does not apply to this Agreement. The courts of California shall have
exclusive jurisdiction of any dispute arising out of this Agreement.
### Meta Llama 3 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you
access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of
this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)
#### Prohibited Uses
We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow
others to use, Meta Llama 3 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Meta Llama 3 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation
of this Policy through one of the following means:
* Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
* Reporting risky content generated by the model:
developers.facebook.com/llama_output_feedback
* Reporting bugs and security concerns: facebook.com/whitehat/info
* Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
## Model Details
Re-uploaded because the original is gated.
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
**Model developers** Meta
**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
**Input** Models input text only.
**Output** Models generate text and code only.
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
<table>
<tr>
<td>
</td>
<td><strong>Training Data</strong>
</td>
<td><strong>Params</strong>
</td>
<td><strong>Context length</strong>
</td>
<td><strong>GQA</strong>
</td>
<td><strong>Token count</strong>
</td>
<td><strong>Knowledge cutoff</strong>
</td>
</tr>
<tr>
<td rowspan="2" >Llama 3
</td>
<td rowspan="2" >A new mix of publicly available online data.
</td>
<td>8B
</td>
<td>8k
</td>
<td>Yes
</td>
<td rowspan="2" >15T+
</td>
<td>March, 2023
</td>
</tr>
<tr>
<td>70B
</td>
<td>8k
</td>
<td>Yes
</td>
<td>December, 2023
</td>
</tr>
</table>
**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date** April 18, 2024.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
## Intended Use
**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
## How to use
This repository contains two versions of Meta-Llama-3-70B, for use with transformers and with the original `llama3` codebase.
### Use with transformers
See the snippet below for usage with Transformers:
```python
>>> import transformers
>>> import torch
>>> model_id = "v2ray/Llama-3-70B"
>>> pipeline = transformers.pipeline(
"text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
)
>>> pipeline("Hey how are you doing today?")
```
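If you prefer to manage tokenization and generation directly rather than going through `pipeline`, a minimal sketch is shown below; sharding the 70B weights with `device_map="auto"` assumes enough combined GPU memory is available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v2ray/Llama-3-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 halves memory relative to fp32
    device_map="auto",           # shard across available GPUs
)

inputs = tokenizer("Hey how are you doing today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```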
### Use with `llama3`
Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3).
For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
<table>
<tr>
<td>
</td>
<td><strong>Time (GPU hours)</strong>
</td>
<td><strong>Power Consumption (W)</strong>
</td>
<td><strong>Carbon Emitted (tCO2eq)</strong>
</td>
</tr>
<tr>
<td>Llama 3 8B
</td>
<td>1.3M
</td>
<td>700
</td>
<td>390
</td>
</tr>
<tr>
<td>Llama 3 70B
</td>
<td>6.4M
</td>
<td>700
</td>
<td>1900
</td>
</tr>
<tr>
<td>Total
</td>
<td>7.7M
</td>
<td>
</td>
<td>2290
</td>
</tr>
</table>
**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
## Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
### Base pretrained models
<table>
<tr>
<td><strong>Category</strong>
</td>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama2 7B</strong>
</td>
<td><strong>Llama2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama2 70B</strong>
</td>
</tr>
<tr>
<td rowspan="6" >General
</td>
<td>MMLU (5-shot)
</td>
<td>66.6
</td>
<td>45.7
</td>
<td>53.8
</td>
<td>79.5
</td>
<td>69.7
</td>
</tr>
<tr>
<td>AGIEval English (3-5 shot)
</td>
<td>45.9
</td>
<td>28.8
</td>
<td>38.7
</td>
<td>63.0
</td>
<td>54.8
</td>
</tr>
<tr>
<td>CommonSenseQA (7-shot)
</td>
<td>72.6
</td>
<td>57.6
</td>
<td>67.6
</td>
<td>83.8
</td>
<td>78.7
</td>
</tr>
<tr>
<td>Winogrande (5-shot)
</td>
<td>76.1
</td>
<td>73.3
</td>
<td>75.4
</td>
<td>83.1
</td>
<td>81.8
</td>
</tr>
<tr>
<td>BIG-Bench Hard (3-shot, CoT)
</td>
<td>61.1
</td>
<td>38.1
</td>
<td>47.0
</td>
<td>81.3
</td>
<td>65.7
</td>
</tr>
<tr>
<td>ARC-Challenge (25-shot)
</td>
<td>78.6
</td>
<td>53.7
</td>
<td>67.6
</td>
<td>93.0
</td>
<td>85.3
</td>
</tr>
<tr>
<td>Knowledge reasoning
</td>
<td>TriviaQA-Wiki (5-shot)
</td>
<td>78.5
</td>
<td>72.1
</td>
<td>79.6
</td>
<td>89.7
</td>
<td>87.5
</td>
</tr>
<tr>
<td rowspan="4" >Reading comprehension
</td>
<td>SQuAD (1-shot)
</td>
<td>76.4
</td>
<td>72.2
</td>
<td>72.1
</td>
<td>85.6
</td>
<td>82.6
</td>
</tr>
<tr>
<td>QuAC (1-shot, F1)
</td>
<td>44.4
</td>
<td>39.6
</td>
<td>44.9
</td>
<td>51.1
</td>
<td>49.4
</td>
</tr>
<tr>
<td>BoolQ (0-shot)
</td>
<td>75.7
</td>
<td>65.5
</td>
<td>66.9
</td>
<td>79.0
</td>
<td>73.1
</td>
</tr>
<tr>
<td>DROP (3-shot, F1)
</td>
<td>58.4
</td>
<td>37.9
</td>
<td>49.8
</td>
<td>79.7
</td>
<td>70.2
</td>
</tr>
</table>
### Instruction tuned models
<table>
<tr>
<td><strong>Benchmark</strong>
</td>
<td><strong>Llama 3 8B</strong>
</td>
<td><strong>Llama 2 7B</strong>
</td>
<td><strong>Llama 2 13B</strong>
</td>
<td><strong>Llama 3 70B</strong>
</td>
<td><strong>Llama 2 70B</strong>
</td>
</tr>
<tr>
<td>MMLU (5-shot)
</td>
<td>68.4
</td>
<td>34.1
</td>
<td>47.8
</td>
<td>82.0
</td>
<td>52.9
</td>
</tr>
<tr>
<td>GPQA (0-shot)
</td>
<td>34.2
</td>
<td>21.7
</td>
<td>22.3
</td>
<td>39.5
</td>
<td>21.0
</td>
</tr>
<tr>
<td>HumanEval (0-shot)
</td>
<td>62.2
</td>
<td>7.9
</td>
<td>14.0
</td>
<td>81.7
</td>
<td>25.6
</td>
</tr>
<tr>
<td>GSM-8K (8-shot, CoT)
</td>
<td>79.6
</td>
<td>25.7
</td>
<td>77.4
</td>
<td>93.0
</td>
<td>57.5
</td>
</tr>
<tr>
<td>MATH (4-shot, CoT)
</td>
<td>30.0
</td>
<td>3.8
</td>
<td>6.7
</td>
<td>50.4
</td>
<td>11.6
</td>
</tr>
</table>
### Responsibility & Safety
We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct
As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
<span style="text-decoration:underline;">Safety</span>
For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
<span style="text-decoration:underline;">Refusals</span>
In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date.
#### Responsible release
In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
Misuse
If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks
<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
We have conducted a two fold assessment of the safety of the model in this area:
* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
### <span style="text-decoration:underline;">Cyber Security </span>
We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
### <span style="text-decoration:underline;">Child Safety</span>
Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Ethical Considerations and Limitations
The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
## Citation instructions
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
## Contributors
Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
|
P0x0/IceMerge-7b-32k | P0x0 | "2024-05-11T09:18:43Z" | 1,344 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"custom_code",
"en",
"arxiv:2306.01708",
"base_model:NousResearch/Yarn-Mistral-7b-128k",
"base_model:icefog72/IceLatteRP-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-11T08:05:10Z" | ---
base_model:
- NousResearch/Yarn-Mistral-7b-128k
- icefog72/IceLatteRP-7b
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) as a base.
### Models Merged
The following models were included in the merge:
* [icefog72/IceLatteRP-7b](https://huggingface.co/icefog72/IceLatteRP-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Yarn-Mistral-7b-128k
parameters:
density: 0.5
weight: 0.5
- model: icefog72/IceLatteRP-7b
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: NousResearch/Yarn-Mistral-7b-128k
parameters:
normalize: false
int8_mask: true
dtype: float16
``` |
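To reproduce a merge like this, the YAML above is typically saved to a file and passed to the mergekit command-line tool; the invocation below is a sketch, assuming a recent mergekit release and an optional CUDA GPU.

```shell
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```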
cross-attention/asymmetric-autoencoder-kl-x-1-5 | cross-attention | "2023-07-19T17:47:08Z" | 1,343 | 3 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"arxiv:2306.04632",
"region:us"
] | null | "2023-07-07T14:32:59Z" | ---
tags:
- stable-diffusion
- stable-diffusion-diffusers
inference: false
library_name: diffusers
---
# Asymmetric Autoencoder KL
[Designing a Better Asymmetric VQGAN for StableDiffusion](https://arxiv.org/abs/2306.04632)
## Abstract
*StableDiffusion is a revolutionary text-to-image generator that is causing a stir in the world of image generation and editing. Unlike traditional methods that learn a diffusion model in pixel space, StableDiffusion learns a diffusion model in the latent space via a VQGAN, ensuring both efficiency and quality. It not only supports image generation tasks, but also enables image editing for real images, such as image inpainting and local editing. However, we have observed that the vanilla VQGAN used in StableDiffusion leads to significant information loss, causing distortion artifacts even in non-edited image regions. To this end, we propose a new asymmetric VQGAN with two simple designs. Firstly, in addition to the input from the encoder, the decoder contains a conditional branch that incorporates information from task-specific priors, such as the unmasked image region in inpainting. Secondly, the decoder is much heavier than the encoder, allowing for more detailed recovery while only slightly increasing the total inference cost. The training cost of our asymmetric VQGAN is cheap, and we only need to retrain a new asymmetric decoder while keeping the vanilla VQGAN encoder and StableDiffusion unchanged. Our asymmetric VQGAN can be widely used in StableDiffusion-based inpainting and local editing methods. Extensive experiments demonstrate that it can significantly improve the inpainting and editing performance, while maintaining the original text-to-image capability. The code is available at https://github.com/buxiangzhiren/Asymmetric_VQGAN/tree/main*
## Scales
* https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5
* https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-2
## Diffusers
```python
from io import BytesIO
from PIL import Image
import requests
from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline
def download_image(url: str) -> Image.Image:
response = requests.get(url)
return Image.open(BytesIO(response.content)).convert("RGB")
prompt = "a photo of a person"
img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"
image = download_image(img_url).resize((256, 256))
mask_image = download_image(mask_url).resize((256, 256))
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-1-5")
pipe.to("cuda")
image = pipe(prompt=prompt, image=image, mask_image=mask_image).images[0]
image.save("image.jpeg")
```
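The heavier x2 decoder listed above can be swapped in the same way; only the checkpoint id passed to `AsymmetricAutoencoderKL.from_pretrained` changes.

```python
pipe.vae = AsymmetricAutoencoderKL.from_pretrained("cross-attention/asymmetric-autoencoder-kl-x-2")
```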
### Visual
_Visualization of VAE performance on a 512x512 image with runwayml/stable-diffusion-inpainting_
<p align="center">
<br>original image, masked image, mask
<br><b>runwayml/stable-diffusion-inpainting original VAE</b>
<br><b>stabilityai/sd-vae-ft-mse VAE</b>
<br><b>Asymmetric Autoencoder KL x1.5 VAE</b>
<br><b>Asymmetric Autoencoder KL x2 VAE</b>
</p>
<p align="center">
<img src=https://huggingface.co/cross-attention/asymmetric-autoencoder-kl-x-1-5/resolve/main/compare.jpeg width="50%"/>
</p> |
caisarl76/Mistral-7B-orca-platy-2k-ep4 | caisarl76 | "2023-10-22T15:18:17Z" | 1,343 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"dataset:kyujinpy/OpenOrca-KO",
"arxiv:2306.02707",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T15:10:52Z" | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/KOpen-platypus
- kyujinpy/OpenOrca-KO
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- kyujinpy/KOpen-platypus
- kyujinpy/OpenOrca-KO
### Prompt Template
- Llama Prompt Template
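The template string itself is not reproduced in this card; for reference, the standard Llama-2 chat format looks like the block below, which is an assumption based on the "Llama Prompt Template" note above rather than something confirmed by the authors.

```
[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_prompt} [/INST]
```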
## Limitations & Biases:
Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) |
hwanhe/Mistral_test01 | hwanhe | "2023-10-30T07:52:41Z" | 1,343 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T07:47:31Z" | ---
license: apache-2.0
---
|
jhflow/mistral7b-lora-multi-turn-v3 | jhflow | "2023-11-06T00:26:53Z" | 1,343 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-06T00:19:53Z" | Entry not found |
TheBloke/dolphin-2_2-yi-34b-GGUF | TheBloke | "2023-11-18T11:36:34Z" | 1,343 | 45 | transformers | [
"transformers",
"gguf",
"yi",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/samantha-data",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"base_model:ehartford/dolphin-2_2-yi-34b",
"license:other",
"region:us"
] | null | "2023-11-13T20:01:36Z" | ---
base_model: ehartford/dolphin-2_2-yi-34b
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
inference: false
language:
- en
license: other
license_link: LICENSE
license_name: yi-license
model_creator: Eric Hartford
model_name: Dolphin 2.2 Yi 34B
model_type: yi
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Dolphin 2.2 Yi 34B - GGUF
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Dolphin 2.2 Yi 34B](https://huggingface.co/ehartford/dolphin-2_2-yi-34b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Eric Hartford's Dolphin 2.2 Yi 34B](https://huggingface.co/ehartford/dolphin-2_2-yi-34b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/dolphin-2_2-yi-34b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
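As a rough sanity check, the effective bits per weight translate almost directly into the file sizes listed in the table below; the arithmetic sketch here assumes roughly 34 billion parameters for the Yi-34B base.

```python
params = 34e9   # approximate parameter count of Yi-34B
bpw = 4.5       # effective bits per weight for GGML_TYPE_Q4_K
size_gb = params * bpw / 8 / 1e9
print(f"~{size_gb:.1f} GB")  # ~19 GB, in line with the Q4_K_S / Q4_K_M rows below
```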
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [dolphin-2_2-yi-34b.Q2_K.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes |
| [dolphin-2_2-yi-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss |
| [dolphin-2_2-yi-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss |
| [dolphin-2_2-yi-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss |
| [dolphin-2_2-yi-34b.Q4_0.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [dolphin-2_2-yi-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss |
| [dolphin-2_2-yi-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended |
| [dolphin-2_2-yi-34b.Q5_0.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [dolphin-2_2-yi-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended |
| [dolphin-2_2-yi-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended |
| [dolphin-2_2-yi-34b.Q6_K.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss |
| [dolphin-2_2-yi-34b.Q8_0.gguf](https://huggingface.co/TheBloke/dolphin-2_2-yi-34b-GGUF/blob/main/dolphin-2_2-yi-34b.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/dolphin-2_2-yi-34b-GGUF and below it, a specific filename to download, such as: dolphin-2_2-yi-34b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/dolphin-2_2-yi-34b-GGUF dolphin-2_2-yi-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/dolphin-2_2-yi-34b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
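If you prefer to stay in Python rather than using the CLI, the same `huggingface-hub` library can fetch a single GGUF file. A minimal sketch (the filename is just the Q4_K_M quant from the table above) might look like:
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file and returns the local (cached) path to it
model_path = hf_hub_download(
    repo_id="TheBloke/dolphin-2_2-yi-34b-GGUF",
    filename="dolphin-2_2-yi-34b.Q4_K_M.gguf",
)
print(model_path)
```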
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/dolphin-2_2-yi-34b-GGUF dolphin-2_2-yi-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m dolphin-2_2-yi-34b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/dolphin-2_2-yi-34b-GGUF", model_file="dolphin-2_2-yi-34b.Q4_K_M.gguf", model_type="yi", gpu_layers=50)
print(llm("AI is going to"))
```
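The `llama-cpp-python` library mentioned above can load the same GGUF file. A minimal sketch (assuming the Q4_K_M file has already been downloaded locally; the prompt text and settings are illustrative) might be:
```python
from llama_cpp import Llama

# n_gpu_layers plays the same role as -ngl in the llama.cpp command above; use 0 for CPU-only
llm = Llama(
    model_path="./dolphin-2_2-yi-34b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about the sea.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
output = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```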
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Eric Hartford's Dolphin 2.2 Yi 34B
Dolphin 2.2 🐬
https://erichartford.com/dolphin
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2-Yi-34b's training was sponsored by [a16z](https://a16z.com/supporting-the-open-source-ai-community/).
This model is based on Yi, and is subject to Yi license.
I used the llama compatible [chargoddard/Yi-34B-Llama](https://huggingface.co/chargoddard/Yi-34B-Llama) as the base model.
Trained with 16k context.
You can load it as follows:
```
from transformers import LlamaForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ehartford/dolphin-2_2-yi-34b", trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained("ehartford/dolphin-2_2-yi-34b")
```
New in 2.2 are conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice, will care about your feelings, and has had extra training in long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation.
## Training
It took 3 days to train 3 epochs on 4x A100s using qLoRA and Axolotl
Prompt format:
This model (and all my future releases) use [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
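Putting the loading code and the prompt format together, a minimal generation sketch (the question and generation settings are illustrative) might look like:
```python
prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Summarise the Orca training approach in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# model and tokenizer are the objects loaded in the snippet above
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```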
## Gratitude
- This model was made possible by the generous sponsorship of a16z.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Thank you to Charles Goddard, for publishing llama-compatible Yi
- Special thanks to Wing Lian, and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
Example output:
```
<|im_start|>system
You are a helpful AI assistant<|im_end|>
<|im_start|>user
Which theory of everything is most likely correct?<|im_end|>
<|im_start|>assistant
The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention:
1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims.
2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles.
Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe.
```
[Buy me a coffee](https://www.buymeacoffee.com/ehartford)
<!-- original-model-card end -->
|
maywell/Synatra-7B-v0.3-Translation | maywell | "2024-03-29T00:57:30Z" | 1,343 | 11 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-17T13:20:19Z" | ---
license: cc-by-sa-4.0
---
# **Synatra-7B-v0.3-Translation🐧**

## Support Me
Synatra is a personal project, developed with the resources of a single person. If you like the model, how about a small contribution toward research costs?
[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)
Wanna be a sponsor? (Please) Contact me on Telegram **AlzarTakkarsen**
# **Model Details**
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
**Datasets**
[sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
A filtered version of the above dataset is included.
**Trained On**
A100 80GB * 1
**Instruction format**
It follows [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format and **Alpaca(No-Input)** format. The first template below instructs the model to translate into Korean, the second into English.
```python
<|im_start|>system
주어진 문장을 한국어로 번역해라.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
```
```python
<|im_start|>system
주어진 문장을 영어로 번역해라.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
```
## Ko-LLM-Leaderboard
On Benchmarking...
# **Implementation Code**
Since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-Translation")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-Translation")
messages = [
{"role": "user", "content": "바나나는 원래 하얀색이야?"},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
``` |
42MARU/GenAI-llama2-ko-en-dpo-13b-v2 | 42MARU | "2023-11-19T10:32:19Z" | 1,343 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-19T10:27:13Z" | Entry not found |
KaeriJenti/kaori-70b-v1 | KaeriJenti | "2023-11-29T08:55:13Z" | 1,343 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-29T02:23:55Z" | ---
license: llama2
---
<h1>kaori-70b-v1 Model Card</h1>
<h3>Datasets:</h3>
- Open-Platypus
- dolphin
- OpenOrca
This model was fine-tuned by Kaeri and Jenti.
<h3>Framework:</h3>
- https://github.com/hiyouga/LLaMA-Efficient-Tuning
<h3>Parameters:</h3>
- Finetune_Type : QLoRA
- GPUs : A100x4(80GB)
- Epochs : 1
- Batchsize : 8
|
inswave/AISquare-Instruct-llama2-koen-13b-v0.9.5 | inswave | "2023-11-30T16:11:55Z" | 1,343 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-30T15:03:50Z" | Entry not found |
kyujinpy/PlatYi-34B-Q | kyujinpy | "2024-03-04T12:08:55Z" | 1,343 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-01T09:54:53Z" | ---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
datasets:
- garage-bAInd/Open-Platypus
pipeline_tag: text-generation
model-index:
- name: PlatYi-34B-Q
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.03
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kyujinpy/PlatYi-34B-Q
name: Open LLM Leaderboard
---
# **PlatYi-34B-QLoRA**
<img src='./PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
PlatYi-34B-QLoRA is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B)
**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
**Notice**
While training, I used QLoRA, but the `lora_r` value is 16, so this model is just for testing.
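For reference, a QLoRA configuration with this rank would look roughly like the following `peft` sketch; apart from `r=16`, the hyperparameters shown are illustrative guesses, not the values actually used:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit base model, as in a typical QLoRA setup
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B", quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                                   # the rank mentioned above
    lora_alpha=32,                          # illustrative
    lora_dropout=0.05,                      # illustrative
    target_modules=["q_proj", "v_proj"],    # illustrative
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```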
# **Model Benchmark**
## Open leaderboard
- Follow up as [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **PlatYi-34B-Q** | 69.86 | 66.89 | 85.14 | 77.66 | 53.03 | 82.48 | 53.98 |
| [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B) | 69.42 | 64.59 | 85.69 | 76.35 | 56.23 | 83.03 | 50.64 |
# Implementation Code
```python
# Load PlatYi-34B-Q and its tokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/PlatYi-34B-Q"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__PlatYi-34B-Q)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.86|
|AI2 Reasoning Challenge (25-Shot)|66.89|
|HellaSwag (10-Shot) |85.14|
|MMLU (5-Shot) |77.66|
|TruthfulQA (0-shot) |53.03|
|Winogrande (5-shot) |82.48|
|GSM8k (5-shot) |53.98|
|
uukuguy/speechless-mistral-six-in-one-7b-orth-1.0 | uukuguy | "2023-12-11T09:43:49Z" | 1,343 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"code",
"en",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:ehartford/samantha-data",
"dataset:CollectiveCognition/chats-data-2023-09-27",
"dataset:stingning/ultrachat",
"arxiv:2310.06825",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-11T09:32:09Z" | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2.1
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- ehartford/samantha-data
- CollectiveCognition/chats-data-2023-09-27
- stingning/ultrachat
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.0
verified: false
---
<p><h1> speechless-mistral-six-in-one-7b-orth-1.0 </h1></p>
# JUST for TEST!
The base model weights are modified in the direction of the changes that occurred during fine-tuning, but only the components of those changes that are orthogonal to the original weight direction are kept.
This approach aims to capture the essence of the fine-tuning while maintaining the original structure as much as possible.
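A rough per-tensor sketch of that idea (an interpretation of the description above, not the exact script used) might look like:
```python
import torch

def orthogonal_update(w_base: torch.Tensor, w_finetuned: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Add only the component of the fine-tuning delta that is orthogonal to the base weights."""
    delta = (w_finetuned - w_base).flatten()
    base = w_base.flatten()
    # Project the delta onto the base weight direction, then discard that parallel component
    parallel = (delta @ base) / (base @ base) * base
    orthogonal = delta - parallel
    return w_base + scale * orthogonal.view_as(w_base)
```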
<p><h1> speechless-mistral-six-in-one-7b </h1></p>
This model is a merge of 6 SOTA Mistral-7B based models:
- ehartford/dolphin-2.1-mistral-7b
- Open-Orca/Mistral-7B-OpenOrca
- bhenrym14/mistral-7b-platypus-fp16
- ehartford/samantha-1.2-mistral-7b
- teknium/CollectiveCognition-v1.1-Mistral-7B
- HuggingFaceH4/zephyr-7b-alpha
[Model benchmark](https://huggingface.co/uukuguy/speechless-mistral-six-in-one-7b/discussions/1) by [sethuiyer](https://huggingface.co/sethuiyer) . Thanks a lot.
> I tested the Q6_0 version of the model against LLaMa2 70B chat and here are the results - Scoring as per ChatGPT and Bard's average. Named this model Mixtral. Questions taken from MT-Benchmark.
>
> On a scale of 0 to 100, I would rate Mixtral at 98. Here's why:
>
> - Intellect (100/100) - Mixtral has demonstrated immense intellectual abilities through its comprehensive knowledge and logical reasoning skills.
> - Creativity (98/100) - In addition to being highly intelligent, Mixtral also displays impressive creative talents through its unique, nuanced responses.
> - Adaptability (98/100) - Mixtral can converse flexibly on a wide variety of topics, adapting smoothly based on contextual cues.
> - Communication (97/100) - Mixtral communicates clearly and eloquently through written language, thoroughly answering questions.
> - Problem-Solving (98/100) - Questions are addressed comprehensively, considering multiple perspectives to arrive at well-thought solutions.
> - Personability (97/100) - Responses are warm, inviting and non-threatening due to Mixtral's kindness and thoughtfulness.
>
> Overall, a very capable model for it's size.
Code: https://github.com/uukuguy/speechless
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
Mistral-7B-v0.1: 30.488
## LM-Evaluation-Harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | 62.97 |
| HellaSwag | 84.6|
| MMLU | 63.29 |
| TruthfulQA | 57.77 |
| Winogrande | 77.51 |
| GSM8K | 18.42 |
| DROP | 9.13 |
| Average | 53.38 |
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.`
|
SanjiWatsuki/Lelantos-DPO-7B | SanjiWatsuki | "2024-01-13T06:49:25Z" | 1,343 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-12T23:39:23Z" | ---
license: cc-by-nc-4.0
---
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Lelantos-DPO-7B](https://huggingface.co/SanjiWatsuki/Lelantos-DPO-7B)| 45.47| 75| 67.05| 46.64| 58.54|
|[Lelantos-7B](https://huggingface.co/SanjiWatsuki/Lelantos-7B)| 46.01| 75| 64.93| 46.21| 58.04|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |25.20|± | 2.73|
| | |acc_norm|24.02|± | 2.69|
|agieval_logiqa_en | 0|acc |40.71|± | 1.93|
| | |acc_norm|40.25|± | 1.92|
|agieval_lsat_ar | 0|acc |24.35|± | 2.84|
| | |acc_norm|23.04|± | 2.78|
|agieval_lsat_lr | 0|acc |55.69|± | 2.20|
| | |acc_norm|55.49|± | 2.20|
|agieval_lsat_rc | 0|acc |65.06|± | 2.91|
| | |acc_norm|65.43|± | 2.91|
|agieval_sat_en | 0|acc |76.70|± | 2.95|
| | |acc_norm|76.70|± | 2.95|
|agieval_sat_en_without_passage| 0|acc |47.09|± | 3.49|
| | |acc_norm|45.63|± | 3.48|
|agieval_sat_math | 0|acc |36.36|± | 3.25|
| | |acc_norm|33.18|± | 3.18|
Average: 45.47%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |62.12|± | 1.42|
| | |acc_norm|63.23|± | 1.41|
|arc_easy | 0|acc |85.40|± | 0.72|
| | |acc_norm|81.02|± | 0.80|
|boolq | 1|acc |87.25|± | 0.58|
|hellaswag | 0|acc |67.97|± | 0.47|
| | |acc_norm|85.48|± | 0.35|
|openbookqa | 0|acc |36.80|± | 2.16|
| | |acc_norm|47.20|± | 2.23|
|piqa | 0|acc |81.88|± | 0.90|
| | |acc_norm|83.57|± | 0.86|
|winogrande | 0|acc |77.27|± | 1.18|
Average: 75.0%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |49.94|± | 1.75|
| | |mc2 |67.05|± | 1.53|
Average: 67.05%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|58.95|± | 3.58|
|bigbench_date_understanding | 0|multiple_choice_grade|64.23|± | 2.50|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|36.43|± | 3.00|
|bigbench_geometric_shapes | 0|multiple_choice_grade|23.68|± | 2.25|
| | |exact_str_match | 3.90|± | 1.02|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|33.40|± | 2.11|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|24.43|± | 1.63|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|54.33|± | 2.88|
|bigbench_movie_recommendation | 0|multiple_choice_grade|52.20|± | 2.24|
|bigbench_navigate | 0|multiple_choice_grade|52.70|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|69.65|± | 1.03|
|bigbench_ruin_names | 0|multiple_choice_grade|50.22|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|40.98|± | 1.56|
|bigbench_snarks | 0|multiple_choice_grade|72.38|± | 3.33|
|bigbench_sports_understanding | 0|multiple_choice_grade|73.23|± | 1.41|
|bigbench_temporal_sequences | 0|multiple_choice_grade|39.90|± | 1.55|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|20.88|± | 1.15|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|17.60|± | 0.91|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|54.33|± | 2.88|
Average: 46.64%
Average score: 58.54%
|
Epiculous/Crunchy-onion | Epiculous | "2024-01-25T14:31:39Z" | 1,343 | 7 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-21T17:06:45Z" | ---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The 4-bit QLoRA adapter was then merged into Mixtral Instruct, resulting in what you see here.
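Merging a QLoRA adapter into an instruct model like this is typically done with `peft`; a minimal sketch (the adapter path is a placeholder, not a published repo) could be:
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", torch_dtype=torch.float16, device_map="auto"
)
# "path/to/qlora-adapter" is a placeholder for the trained LoRA adapter
merged = PeftModel.from_pretrained(base, "path/to/qlora-adapter").merge_and_unload()
merged.save_pretrained("Crunchy-onion")
```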
Works best with Alpaca Instruct |
ibivibiv/strix-rufipes-70b | ibivibiv | "2024-03-04T23:44:45Z" | 1,343 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"logic",
"planning",
"en",
"arxiv:1803.05457",
"arxiv:1905.07830",
"arxiv:2009.03300",
"arxiv:2109.07958",
"arxiv:1907.10641",
"arxiv:2110.14168",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T02:47:29Z" | ---
language:
- en
license: llama2
tags:
- logic
- planning
model-index:
- name: strix-rufipes-70b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.33
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/strix-rufipes-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/strix-rufipes-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/strix-rufipes-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/strix-rufipes-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/strix-rufipes-70b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/strix-rufipes-70b
name: Open LLM Leaderboard
---
# Strix Rufipes 70B

# Prompting
## Prompt Template for alpaca style
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("ibivibiv/strix-rufipes-70b", torch_dtype="auto", device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ibivibiv/strix-rufipes-70b")
inputs = tokenizer("### Instruction: Create a plan for developing the game of snake in python using pygame.\n### Response:\n", return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
# Model Details
* **Trained by**: [ibivibiv](https://huggingface.co/ibivibiv)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **Model type:** **strix-rufipes-70b** is an auto-regressive language model fine-tuned on the Llama 2 transformer architecture.
* **Language(s)**: English
* **Purpose**: Has specific training for logic enforcement. This model is targeted towards planning exercises.
# Benchmark Scores
| Test Name | Accuracy |
|-------------------------------------------------------|----------------------|
| average of all | 0.6910894247381432 |
| arc:challenge | 0.674061433447099 |
| hellaswag | 0.6898028281218881 |
| hendrycksTest-abstract_algebra | 0.36 |
| hendrycksTest-anatomy | 0.6370370370370371 |
| hendrycksTest-astronomy | 0.7960526315789473 |
| hendrycksTest-business_ethics | 0.73 |
| hendrycksTest-clinical_knowledge | 0.7169811320754716 |
| hendrycksTest-college_biology | 0.8125 |
| hendrycksTest-college_chemistry | 0.47 |
| hendrycksTest-college_computer_science | 0.56 |
| hendrycksTest-college_mathematics | 0.36 |
| hendrycksTest-college_medicine | 0.6820809248554913 |
| hendrycksTest-college_physics | 0.43137254901960786 |
| hendrycksTest-computer_security | 0.75 |
| hendrycksTest-conceptual_physics | 0.6851063829787234 |
| hendrycksTest-econometrics | 0.4824561403508772 |
| hendrycksTest-electrical_engineering | 0.5793103448275863 |
| hendrycksTest-elementary_mathematics | 0.41534391534391535 |
| hendrycksTest-formal_logic | 0.48412698412698413 |
| hendrycksTest-global_facts | 0.5 |
| hendrycksTest-high_school_biology | 0.8064516129032258 |
| hendrycksTest-high_school_chemistry | 0.5073891625615764 |
| hendrycksTest-high_school_computer_science | 0.71 |
| hendrycksTest-high_school_european_history | 0.8424242424242424 |
| hendrycksTest-high_school_geography | 0.8787878787878788 |
| hendrycksTest-high_school_government_and_politics | 0.9326424870466321 |
| hendrycksTest-high_school_macroeconomics | 0.717948717948718 |
| hendrycksTest-high_school_mathematics | 0.2962962962962963 |
| hendrycksTest-high_school_microeconomics | 0.7521008403361344 |
| hendrycksTest-high_school_physics | 0.48344370860927155 |
| hendrycksTest-high_school_psychology | 0.8788990825688073 |
| hendrycksTest-high_school_statistics | 0.5277777777777778 |
| hendrycksTest-high_school_us_history | 0.9019607843137255 |
| hendrycksTest-high_school_world_history | 0.8776371308016878 |
| hendrycksTest-human_aging | 0.7802690582959642 |
| hendrycksTest-human_sexuality | 0.8244274809160306 |
| hendrycksTest-international_law | 0.8677685950413223 |
| hendrycksTest-jurisprudence | 0.8148148148148148 |
| hendrycksTest-logical_fallacies | 0.7914110429447853 |
| hendrycksTest-machine_learning | 0.5357142857142857 |
| hendrycksTest-management | 0.8543689320388349 |
| hendrycksTest-marketing | 0.8974358974358975 |
| hendrycksTest-medical_genetics | 0.73 |
| hendrycksTest-miscellaneous | 0.8569604086845466 |
| hendrycksTest-moral_disputes | 0.7687861271676301 |
| hendrycksTest-moral_scenarios | 0.5184357541899441 |
| hendrycksTest-nutrition | 0.7679738562091504 |
| hendrycksTest-philosophy | 0.7620578778135049 |
| hendrycksTest-prehistory | 0.8271604938271605 |
| hendrycksTest-professional_accounting | 0.5390070921985816 |
| hendrycksTest-professional_law | 0.5743155149934811 |
| hendrycksTest-professional_medicine | 0.6911764705882353 |
| hendrycksTest-professional_psychology | 0.7565359477124183 |
| hendrycksTest-public_relations | 0.7272727272727273 |
| hendrycksTest-security_studies | 0.8 |
| hendrycksTest-sociology | 0.8507462686567164 |
| hendrycksTest-us_foreign_policy | 0.89 |
| hendrycksTest-virology | 0.5542168674698795 |
| hendrycksTest-world_religions | 0.8596491228070176 |
| truthfulqa | 0.4712300987333333 |
| winogrande | 0.8476716653512234 |
| gsm8k | 0.5382865807429871 |
## Citations
```
@misc{open-llm-leaderboard,
author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf},
title = {Open LLM Leaderboard},
year = {2023},
publisher = {Hugging Face},
howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}"
}
```
```
@software{eval-harness,
author = {Gao, Leo and
Tow, Jonathan and
Biderman, Stella and
Black, Sid and
DiPofi, Anthony and
Foster, Charles and
Golding, Laurence and
Hsu, Jeffrey and
McDonell, Kyle and
Muennighoff, Niklas and
Phang, Jason and
Reynolds, Laria and
Tang, Eric and
Thite, Anish and
Wang, Ben and
Wang, Kevin and
Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = sep,
year = 2021,
publisher = {Zenodo},
version = {v0.0.1},
doi = {10.5281/zenodo.5371628},
url = {https://doi.org/10.5281/zenodo.5371628}
}
```
```
@misc{clark2018think,
title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge},
author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord},
year={2018},
eprint={1803.05457},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
```
@misc{zellers2019hellaswag,
title={HellaSwag: Can a Machine Really Finish Your Sentence?},
author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
year={2019},
eprint={1905.07830},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{hendrycks2021measuring,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
year={2021},
eprint={2009.03300},
archivePrefix={arXiv},
primaryClass={cs.CY}
}
```
```
@misc{lin2022truthfulqa,
title={TruthfulQA: Measuring How Models Mimic Human Falsehoods},
author={Stephanie Lin and Jacob Hilton and Owain Evans},
year={2022},
eprint={2109.07958},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{DBLP:journals/corr/abs-1907-10641,
title={{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale},
author={Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi},
year={2019},
eprint={1907.10641},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{DBLP:journals/corr/abs-2110-14168,
title={Training Verifiers to Solve Math Word Problems},
author={Karl Cobbe and
Vineet Kosaraju and
Mohammad Bavarian and
Mark Chen and
Heewoo Jun and
Lukasz Kaiser and
Matthias Plappert and
Jerry Tworek and
Jacob Hilton and
Reiichiro Nakano and
Christopher Hesse and
John Schulman},
year={2021},
eprint={2110.14168},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__strix-rufipes-70b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.61|
|AI2 Reasoning Challenge (25-Shot)|71.33|
|HellaSwag (10-Shot) |87.86|
|MMLU (5-Shot) |69.13|
|TruthfulQA (0-shot) |56.72|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |53.83|
|
philz1337x/cyberrealistic-v4.2 | philz1337x | "2024-03-21T14:17:11Z" | 1,343 | 1 | diffusers | [
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-21T05:55:58Z" | Entry not found |
lex-hue/Delexa-V0.1-7b | lex-hue | "2024-04-25T03:32:08Z" | 1,343 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"custom_code",
"doi:10.57967/hf/2151",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-06T19:40:50Z" | ---
license: apache-2.0
model-index:
- name: Delexa-V0.1-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lex-hue/Delexa-V0.1-7b
name: Open LLM Leaderboard
---
## Delexa-V0.1-7b: Our Newest and Best Model Yet!
We are excited to announce the release of Delexa-V0.1-7b, our newest and best model yet! Delexa-V0.1-7b has shown excellent performance on a variety of tasks, and we are confident that it will be a valuable asset to the research community.
### Eval Results
Delexa-V0.1-7b was evaluated on a dataset of question-answer pairs. The model was given a single question and three different answer choices, and it was tasked with selecting the best answer. Delexa-V0.1-7b achieved an average score of 8.19 on this task, ahead of gpt-3.5-turbo (7.94) and claude-v1 (7.90), though still below gpt-4 (8.99).
Here is a table showing the detailed eval results:
| Model | Turn 1 | Turn 2 | Average |
|---|---|---|---|
| gpt-4 | 8.95625 | 9.0250 | 8.990625 |
| Delexa-V0.1-7b | 8.57500 | 7.8125 | 8.193750 |
| claude-v1 | 8.15000 | 7.6500 | 7.900000 |
| gpt-3.5-turbo | 8.07500 | 7.8125 | 7.943750 |
| vicuna-13b-v1.3 | 6.81250 | 5.9625 | 6.387500 |
| palm-2-chat-bison-001 | 6.71250 | 6.0875 | 6.400000 |

### Technique
One of the key factors that contributed to Delexa-V0.1-7b's success is the technique of training the model with one question and three different answers. This technique allows the model to take into account different perspectives and viewpoints, which leads to more robust and accurate results.
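As a purely illustrative sketch (the actual training data and its format are not published here), a record pairing one question with three candidate answers might be structured like this:
```python
# Hypothetical record structure; field names and contents are illustrative only
record = {
    "question": "Why does ice float on water?",
    "answers": [
        "Because water expands as it freezes, so ice is less dense than liquid water.",
        "Ice has a lower density than liquid water due to its open hydrogen-bonded crystal structure.",
        "Freezing locks the molecules into a lattice that takes up more volume, lowering the density.",
    ],
}
```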
### Future Work
We are excited to continue working on Delexa and to see how it can be further improved. We are currently working on an Instruct model, which is a type of model that can be fine-tuned on specific tasks. We believe that Instruct models have the potential to be even more powerful than Delexa-V0.1-7b, and we are eager to see the results of our ongoing research.
We would like to thank the entire team for their hard work on Delexa-V0.1-7b. We are confident that this model will be a valuable asset to the research community.
### Guardrails:
This model allows 18+ content and lewd content, but it won't let any illegal content through (unless you jailbreak it).
### Support Our Work and join our Community!:
[Our Patreon](https://patreon.com/Lex_Hue?utm_medium=unknown&utm_source=join_link&utm_campaign=creatorshare_creator&utm_content=copyLink)
[Our Twitter](https://twitter.com/lex_hue)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lex-hue__Delexa-V0.1-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.94|
|AI2 Reasoning Challenge (25-Shot)|66.38|
|HellaSwag (10-Shot) |85.98|
|MMLU (5-Shot) |63.97|
|TruthfulQA (0-shot) |61.69|
|Winogrande (5-shot) |78.06|
|GSM8k (5-shot) |63.53|
|
brittlewis12/Hermes-2-Pro-Llama-3-8B-GGUF | brittlewis12 | "2024-05-05T17:57:16Z" | 1,343 | 2 | null | [
"gguf",
"region:us"
] | null | "2024-05-02T01:33:45Z" | Entry not found |
NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF | NeverSleep | "2024-05-07T11:12:02Z" | 1,343 | 9 | null | [
"gguf",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-05-05T15:47:36Z" | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets, we tried to have a balance between the ERP and the RP, not too horny, but just enough.
We also added some non-RP dataset, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data.
This model includes the new Luminae dataset from Ikari.
This model have received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request.
If you consider trying this model please give us some feedback either on the Community tab on hf or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains GGUF files of Lumimaid-8B-v0.1-OAS.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-GGUF) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-GGUF) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt-GGUF) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS-GGUF)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
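When running these GGUF files with `llama-cpp-python`, you can usually let the library apply the chat template embedded in the GGUF metadata instead of formatting the prompt by hand. A minimal sketch (the file name and settings are illustrative) is:
```python
from llama_cpp import Llama

# The chat template is read from the GGUF metadata; n_gpu_layers=-1 offloads all layers if a GPU is available
llm = Llama(model_path="./Llama-3-Lumimaid-8B-v0.1-OAS.Q4_K_M.gguf", n_ctx=8192, n_gpu_layers=-1)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplay assistant."},
        {"role": "user", "content": "Introduce your character in two sentences."},
    ],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```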
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
lxyuan/vit-xray-pneumonia-classification | lxyuan | "2023-09-13T09:34:49Z" | 1,342 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:chest-xray-classification",
"dataset:keremberke/chest-xray-classification",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-06-24T09:44:18Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- chest-xray-classification
- keremberke/chest-xray-classification
metrics:
- accuracy
pipeline_tag: image-classification
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit-xray-pneumonia-classification
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: chest-xray-classification
type: chest-xray-classification
config: full
split: validation
args: full
metrics:
- type: accuracy
value: 0.9742489270386266
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-xray-pneumonia-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the chest-xray-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0868
- Accuracy: 0.9742
## Inference example
```python
from transformers import pipeline
classifier = pipeline(model="lxyuan/vit-xray-pneumonia-classification")
# image taken from https://www.news-medical.net/health/What-is-Viral-Pneumonia.aspx
classifier("https://d2jx2rerrg6sh3.cloudfront.net/image-handler/ts/20200618040600/ri/650/picture/2020/6/shutterstock_786937069.jpg")
>>>
[{'score': 0.990334689617157, 'label': 'PNEUMONIA'},
{'score': 0.009665317833423615, 'label': 'NORMAL'}]
```
## Training procedure
Notebook link: [here](https://github.com/LxYuan0420/nlp/blob/main/notebooks/ViT-xray-classification.ipynb)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
```python
from transformers import EarlyStoppingCallback
training_args = TrainingArguments(
output_dir="vit-xray-pneumonia-classification",
remove_unused_columns=False,
evaluation_strategy="epoch",
save_strategy="epoch",
logging_strategy="epoch",
learning_rate=5e-5,
per_device_train_batch_size=16,
gradient_accumulation_steps=4,
per_device_eval_batch_size=16,
num_train_epochs=15,
save_total_limit=2,
warmup_ratio=0.1,
load_best_model_at_end=True,
metric_for_best_model="eval_loss",
greater_is_better=False,
fp16=True,
push_to_hub=True,
report_to="tensorboard"
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_ds,
eval_dataset=val_ds,
tokenizer=processor,
compute_metrics=compute_metrics,
callbacks=[early_stopping],
)
```
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5152 | 0.99 | 63 | 0.2507 | 0.9245 |
| 0.2334 | 1.99 | 127 | 0.1766 | 0.9382 |
| 0.1647 | 3.0 | 191 | 0.1218 | 0.9588 |
| 0.144 | 4.0 | 255 | 0.1222 | 0.9502 |
| 0.1348 | 4.99 | 318 | 0.1293 | 0.9571 |
| 0.1276 | 5.99 | 382 | 0.1000 | 0.9665 |
| 0.1175 | 7.0 | 446 | 0.1177 | 0.9502 |
| 0.109 | 8.0 | 510 | 0.1079 | 0.9665 |
| 0.0914 | 8.99 | 573 | 0.0804 | 0.9717 |
| 0.0872 | 9.99 | 637 | 0.0800 | 0.9717 |
| 0.0804 | 11.0 | 701 | 0.0862 | 0.9682 |
| 0.0935 | 12.0 | 765 | 0.0883 | 0.9657 |
| 0.0686 | 12.99 | 828 | 0.0868 | 0.9742 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.9.0+cu102
- Datasets 2.12.0
- Tokenizers 0.13.3 |
TheBloke/zephyr-7B-alpha-GPTQ | TheBloke | "2023-10-14T07:12:11Z" | 1,342 | 27 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"en",
"dataset:stingning/ultrachat",
"dataset:openbmb/UltraFeedback",
"arxiv:2305.18290",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-10-11T03:26:23Z" | ---
base_model: HuggingFaceH4/zephyr-7b-alpha
datasets:
- stingning/ultrachat
- openbmb/UltraFeedback
inference: false
language:
- en
license: mit
model-index:
- name: zephyr-7b-alpha
results: []
model_creator: Hugging Face H4
model_name: Zephyr 7B Alpha
model_type: mistral
prompt_template: '<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'
quantized_by: TheBloke
tags:
- generated_from_trainer
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Zephyr 7B Alpha - GPTQ
- Model creator: [Hugging Face H4](https://huggingface.co/HuggingFaceH4)
- Original model: [Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
<!-- description start -->
## Description
This repo contains GPTQ model files for [Hugging Face H4's Zephyr 7B Alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/zephyr-7B-alpha-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/zephyr-7B-alpha-GGUF)
* [Hugging Face H4's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Zephyr
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" means no grouping at all, which uses the least VRAM but gives the lowest accuracy.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4095 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4095 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4095 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4095 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4095 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4095 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
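If you want to reproduce a quantisation like the ones above yourself, the same parameters map directly onto Transformers' `GPTQConfig`. The following is a minimal sketch mirroring the `main` branch row (4-bit, group size 128, Act Order, damp 0.1, wikitext calibration); the output path is illustrative, and it assumes `optimum` and `auto-gptq` are installed as described in the Python usage section below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "HuggingFaceH4/zephyr-7b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# Mirrors the `main` branch row above: 4-bit, group size 128, Act Order (desc_act),
# damp 0.1, calibrated on wikitext2.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    damp_percent=0.1,
    dataset="wikitext2",
    tokenizer=tokenizer,
)

# Passing the config to from_pretrained triggers quantisation while loading.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

# Illustrative output path for the quantised checkpoint.
model.save_pretrained("zephyr-7b-alpha-gptq-4bit-128g")
tokenizer.save_pretrained("zephyr-7b-alpha-gptq-4bit-128g")
```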
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/zephyr-7B-alpha-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/zephyr-7B-alpha-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `zephyr-7B-alpha-GPTQ`:
```shell
mkdir zephyr-7B-alpha-GPTQ
huggingface-cli download TheBloke/zephyr-7B-alpha-GPTQ --local-dir zephyr-7B-alpha-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir zephyr-7B-alpha-GPTQ
huggingface-cli download TheBloke/zephyr-7B-alpha-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir zephyr-7B-alpha-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and lets you quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason I don't list it as the default option, is that the files are then hidden away in a cache folder, making it harder to see where your disk space is being used and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
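The same downloads can also be scripted from Python with `huggingface_hub.snapshot_download`. This is a minimal sketch equivalent to the CLI commands above; the branch name is just an example.

```python
from huggingface_hub import snapshot_download

# Download the gptq-4bit-32g-actorder_True branch into a plain folder,
# equivalent to passing --local-dir-use-symlinks False on the CLI.
snapshot_download(
    repo_id="TheBloke/zephyr-7B-alpha-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="zephyr-7B-alpha-GPTQ",
    local_dir_use_symlinks=False,
)
```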
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir zephyr-7B-alpha-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/zephyr-7B-alpha-GPTQ --local-dir zephyr-7B-alpha-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, because it has to store the model files twice (every byte is stored both in the intended target folder and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/zephyr-7B-alpha-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/zephyr-7B-alpha-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `zephyr-7B-alpha-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json` (a quick way to inspect that file is sketched just after this list).
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
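If you are curious what was picked up, the quantisation parameters live alongside the model files and can be inspected directly. This is a small sketch; the path is an assumption and depends on where text-generation-webui stored the download.

```python
import json
from pathlib import Path

# Assumed path: adjust to wherever text-generation-webui placed the model.
config_path = Path("models/zephyr-7B-alpha-GPTQ/quantize_config.json")
print(json.dumps(json.loads(config_path.read_text()), indent=2))
# For the `main` branch you should see values matching the Provided Files table,
# e.g. "bits": 4, "group_size": 128, "desc_act": true, "damp_percent": 0.1
```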
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/zephyr-7B-alpha-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.4.2
pip3 install .
```
### You can then use the following code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/zephyr-7B-alpha-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Hugging Face H4's Zephyr 7B Alpha
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for Zephyr 7B Alpha
Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means the model is likely to generate problematic text when prompted to do so and should only be used for educational and research purposes.
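For reference, the DPO objective from the linked paper is shown below: π_θ is the model being trained, π_ref is the frozen reference model (the supervised fine-tuned checkpoint), (x, y_w, y_l) are a prompt together with its chosen and rejected completions, and β controls how strongly the policy is kept close to the reference.

$$
\mathcal{L}_\mathrm{DPO}(\pi_\theta; \pi_\mathrm{ref}) = -\,\mathbb{E}_{(x, y_w, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\mathrm{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_\mathrm{ref}(y_l \mid x)}\right)\right]
$$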
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
## Intended uses & limitations
The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4. As a result, the model can be used for chat and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat) to test its capabilities.
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
```
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Zephyr-7B-α has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) are unknown; however, it is likely to have included a mix of Web data and technical sources such as books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
## Training and evaluation data
Zephyr 7B Alpha achieves the following results on the evaluation set:
- Loss: 0.4605
- Rewards/chosen: -0.5053
- Rewards/rejected: -1.8752
- Rewards/accuracies: 0.7812
- Rewards/margins: 1.3699
- Logps/rejected: -327.4286
- Logps/chosen: -297.1040
- Logits/rejected: -2.7153
- Logits/chosen: -2.7447
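These Rewards/* values are easiest to read through the DPO implicit reward. Assuming the conventions of TRL's `DPOTrainer` (the trainer named earlier in this card), the implicit reward of a completion y for a prompt x is

$$ r_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_\mathrm{ref}(y \mid x)} $$

so Rewards/chosen and Rewards/rejected are the mean implicit rewards of the chosen and rejected completions, Rewards/margins is their difference, and Rewards/accuracies is the fraction of pairs in which the chosen completion receives the higher reward.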
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
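As a rough illustration only, the hyperparameters above correspond to a TRL `DPOTrainer` set-up along the following lines. This is a sketch, not the actual training script (which lives in the alignment-handbook repository): it assumes the TRL 0.7-era API, the starting checkpoint and β value are placeholders, and the UltraFeedback field names used in the mapping function are assumptions about that dataset's schema.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Placeholder: the real run started from a Mistral-7B checkpoint already
# supervised-fine-tuned on UltraChat; that exact checkpoint is not named here.
model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def to_preference_pairs(example):
    # Hypothetical mapping into prompt/chosen/rejected columns; the field names
    # ("instruction", "completions", "overall_score", "response") are assumptions.
    ranked = sorted(example["completions"], key=lambda c: c["overall_score"], reverse=True)
    return {
        "prompt": example["instruction"],
        "chosen": ranked[0]["response"],
        "rejected": ranked[-1]["response"],
    }

raw = load_dataset("openbmb/UltraFeedback", split="train")
train_dataset = raw.map(to_preference_pairs, remove_columns=raw.column_names)

args = TrainingArguments(
    output_dir="zephyr-7b-dpo",
    learning_rate=5e-7,
    per_device_train_batch_size=2,   # x 16 GPUs = total train batch size of 32
    per_device_eval_batch_size=4,    # x 16 GPUs = total eval batch size of 64
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # TRL clones a frozen reference model when None is passed
    beta=0.1,         # DPO temperature; the value actually used is not stated in this card
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```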
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5602 | 0.05 | 100 | 0.5589 | -0.3359 | -0.8168 | 0.7188 | 0.4809 | -306.2607 | -293.7161 | -2.6554 | -2.6797 |
| 0.4852 | 0.1 | 200 | 0.5136 | -0.5310 | -1.4994 | 0.8125 | 0.9684 | -319.9124 | -297.6181 | -2.5762 | -2.5957 |
| 0.5212 | 0.15 | 300 | 0.5168 | -0.1686 | -1.1760 | 0.7812 | 1.0074 | -313.4444 | -290.3699 | -2.6865 | -2.7125 |
| 0.5496 | 0.21 | 400 | 0.4835 | -0.1617 | -1.7170 | 0.8281 | 1.5552 | -324.2635 | -290.2326 | -2.7947 | -2.8218 |
| 0.5209 | 0.26 | 500 | 0.5054 | -0.4778 | -1.6604 | 0.7344 | 1.1826 | -323.1325 | -296.5546 | -2.8388 | -2.8667 |
| 0.4617 | 0.31 | 600 | 0.4910 | -0.3738 | -1.5180 | 0.7656 | 1.1442 | -320.2848 | -294.4741 | -2.8234 | -2.8521 |
| 0.4452 | 0.36 | 700 | 0.4838 | -0.4591 | -1.6576 | 0.7031 | 1.1986 | -323.0770 | -296.1796 | -2.7401 | -2.7653 |
| 0.4674 | 0.41 | 800 | 0.5077 | -0.5692 | -1.8659 | 0.7656 | 1.2967 | -327.2416 | -298.3818 | -2.6740 | -2.6945 |
| 0.4656 | 0.46 | 900 | 0.4927 | -0.5279 | -1.6614 | 0.7656 | 1.1335 | -323.1518 | -297.5553 | -2.7817 | -2.8015 |
| 0.4102 | 0.52 | 1000 | 0.4772 | -0.5767 | -2.0667 | 0.7656 | 1.4900 | -331.2578 | -298.5311 | -2.7160 | -2.7455 |
| 0.4663 | 0.57 | 1100 | 0.4740 | -0.8038 | -2.1018 | 0.7656 | 1.2980 | -331.9604 | -303.0741 | -2.6994 | -2.7257 |
| 0.4737 | 0.62 | 1200 | 0.4716 | -0.3783 | -1.7015 | 0.7969 | 1.3232 | -323.9545 | -294.5634 | -2.6842 | -2.7135 |
| 0.4259 | 0.67 | 1300 | 0.4866 | -0.6239 | -1.9703 | 0.7812 | 1.3464 | -329.3312 | -299.4761 | -2.7046 | -2.7356 |
| 0.4935 | 0.72 | 1400 | 0.4747 | -0.5626 | -1.7600 | 0.7812 | 1.1974 | -325.1243 | -298.2491 | -2.7153 | -2.7444 |
| 0.4211 | 0.77 | 1500 | 0.4645 | -0.6099 | -1.9993 | 0.7656 | 1.3894 | -329.9109 | -299.1959 | -2.6944 | -2.7236 |
| 0.4931 | 0.83 | 1600 | 0.4684 | -0.6798 | -2.1082 | 0.7656 | 1.4285 | -332.0890 | -300.5934 | -2.7006 | -2.7305 |
| 0.5029 | 0.88 | 1700 | 0.4595 | -0.5063 | -1.8951 | 0.7812 | 1.3889 | -327.8267 | -297.1233 | -2.7108 | -2.7403 |
| 0.4965 | 0.93 | 1800 | 0.4613 | -0.5561 | -1.9079 | 0.7812 | 1.3518 | -328.0831 | -298.1203 | -2.7226 | -2.7523 |
| 0.4337 | 0.98 | 1900 | 0.4608 | -0.5066 | -1.8718 | 0.7656 | 1.3652 | -327.3599 | -297.1296 | -2.7175 | -2.7469 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0
|
dltjdgh0928/mistral_open_orca_ko | dltjdgh0928 | "2023-10-30T07:54:31Z" | 1,342 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T07:50:02Z" | ---
license: apache-2.0
---
|