| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
Ammartatox/llamared2-Q4_K_M-GGUF | Ammartatox | 2024-06-26T15:10:39Z | 732 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Ammartatox/llamared2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T15:10:21Z | ---
base_model: Ammartatox/llamared2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- llama-cpp
- gguf-my-repo
---
# Ammartatox/llamared2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Ammartatox/llamared2`](https://huggingface.co/Ammartatox/llamared2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Ammartatox/llamared2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ammartatox/llamared2-Q4_K_M-GGUF --hf-file llamared2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ammartatox/llamared2-Q4_K_M-GGUF --hf-file llamared2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Ammartatox/llamared2-Q4_K_M-GGUF --hf-file llamared2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Ammartatox/llamared2-Q4_K_M-GGUF --hf-file llamared2-q4_k_m.gguf -c 2048
```
|
huggingtweets/mormo_music | huggingtweets | 2021-05-22T15:16:51Z | 731 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
language: en
thumbnail: https://www.huggingtweets.com/mormo_music/1619264382586/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1309110567383322624/_bG1P3yC_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">zeta mask yo (42/?? years) 🤖 AI Bot </div>
<div style="font-size: 15px">@mormo_music bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@mormo_music's tweets](https://twitter.com/mormo_music).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 178 |
| Short tweets | 325 |
| Tweets kept | 2744 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1hjkc8nh/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @mormo_music's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/8guhilo5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/8guhilo5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/mormo_music')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
maelfabien/marcel_customer_service | maelfabien | 2021-04-13T15:43:17Z | 731 | 1 | transformers | [
"transformers",
"pytorch",
"camembert",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found |
striki-ai/william-shakespeare-poetry | striki-ai | 2021-06-05T20:25:15Z | 731 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | Entry not found |
skytnt/gpt2-japanese-lyric-small | skytnt | 2023-10-23T12:46:36Z | 731 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"gpt2",
"text-generation",
"ja",
"japanese",
"lm",
"nlp",
"dataset:skytnt/japanese-lyric",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-04-21T04:25:18Z | ---
language: ja
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
widget:
- text: 桜が咲く
datasets:
- skytnt/japanese-lyric
---
# Japanese GPT2 Lyric Model
## Model description
The model is used to generate Japanese lyrics.
You can try it on my website [https://lyric.fab.moe/](https://lyric.fab.moe/#/)
## How to use
```python
import torch
from transformers import T5Tokenizer, GPT2LMHeadModel
tokenizer = T5Tokenizer.from_pretrained("skytnt/gpt2-japanese-lyric-small")
model = GPT2LMHeadModel.from_pretrained("skytnt/gpt2-japanese-lyric-small")
device = "cuda" if torch.cuda.is_available() else "cpu"  # define the device used below
model = model.to(device)
def gen_lyric(prompt_text: str):
prompt_text = "<s>" + prompt_text.replace("\n", "\\n ")
prompt_tokens = tokenizer.tokenize(prompt_text)
prompt_token_ids = tokenizer.convert_tokens_to_ids(prompt_tokens)
prompt_tensor = torch.LongTensor(prompt_token_ids).to(device)
prompt_tensor = prompt_tensor.view(1, -1)
# model forward
output_sequences = model.generate(
input_ids=prompt_tensor,
max_length=512,
top_p=0.95,
top_k=40,
temperature=1.0,
do_sample=True,
early_stopping=True,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
num_return_sequences=1
)
# convert model outputs to readable sentence
generated_sequence = output_sequences.tolist()[0]
generated_tokens = tokenizer.convert_ids_to_tokens(generated_sequence)
generated_text = tokenizer.convert_tokens_to_string(generated_tokens)
generated_text = "\n".join([s.strip() for s in generated_text.split('\\n')]).replace(' ', '\u3000').replace('<s>', '').replace('</s>', '\n\n---end---')
return generated_text
print(gen_lyric("桜が咲く"))
```
## Training data
[Training data](https://huggingface.co/datasets/skytnt/japanese-lyric/blob/main/lyric_clean.pkl) contains 143,587 Japanese lyrics collected from [uta-net](https://www.uta-net.com/) using [lyric_download](https://github.com/SkyTNT/lyric_downlowd) |
metaloopa/DialoGPT-medium-Rintaro | metaloopa | 2022-09-12T16:01:12Z | 731 | 1 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-09-12T14:44:57Z | ---
tags:
- conversational
--- |
timm/xcit_small_12_p16_384.fb_dist_in1k | timm | 2024-02-10T23:43:53Z | 731 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2106.09681",
"license:apache-2.0",
"region:us"
] | image-classification | 2023-04-13T02:25:31Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for xcit_small_12_p16_384.fb_dist_in1k
An XCiT (Cross-Covariance Image Transformer) image classification model. Pretrained on ImageNet-1k with distillation by the paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 26.3
- GMACs: 14.1
- Activations (M): 36.5
- Image size: 384 x 384
- **Papers:**
- XCiT: Cross-Covariance Image Transformers: https://arxiv.org/abs/2106.09681
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/facebookresearch/xcit
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xcit_small_12_p16_384.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xcit_small_12_p16_384.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 577, 384) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{el2021xcit,
title={XCiT: Cross-Covariance Image Transformers},
author={El-Nouby, Alaaeldin and Touvron, Hugo and Caron, Mathilde and Bojanowski, Piotr and Douze, Matthijs and Joulin, Armand and Laptev, Ivan and Neverova, Natalia and Synnaeve, Gabriel and Verbeek, Jakob and others},
journal={arXiv preprint arXiv:2106.09681},
year={2021}
}
```
|
ckandemir/solidity-generator | ckandemir | 2023-11-09T22:27:02Z | 731 | 7 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"en",
"dataset:mwritescode/slither-audited-smart-contracts",
"base_model:codeparrot/codeparrot-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-07T00:32:18Z | ---
license: apache-2.0
base_model: codeparrot/codeparrot-small
tags:
- generated_from_trainer
model-index:
- name: solidity-generator
results: []
datasets:
- mwritescode/slither-audited-smart-contracts
pipeline_tag: text-generation
language:
- en
library_name: transformers
widget:
- text: "contract MyToken is ERC20{"
---
# solidity-generator
This model is specialized in generating Solidity contract code. Derived from the [codeparrot/codeparrot-small](https://huggingface.co/codeparrot/codeparrot-small) model, it has been trained on an extensive set of Solidity contracts and patterns, making it well suited to drafting or suggesting contract structures.
## Model description
This model has been designed specifically for generating Solidity contracts. As a derivative of the `codeparrot-small` model, it retains the broader capabilities of its parent while showing strong proficiency in understanding and generating Solidity-centric text.
### Performance
The model reported a loss of `0.2180` on the evaluation set.
## Intended Uses & Limitations
### Intended Uses:
1. Assist developers by auto-generating contract code snippets based on prompts.
2. Help in understanding and drafting complex contract structures.
### Limitations:
1. The generated code must be reviewed for security and functional correctness.
2. The clarity of the generated code largely depends on the specificity of the prompt.
## Training Details
### Dataset
The model was fine-tuned on the [mwritescode/slither-audited-smart-contracts](https://huggingface.co/datasets/mwritescode/slither-audited-smart-contracts) dataset, which comprises a range of Solidity contracts.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 144
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.302 | 0.35 | 2000 | 0.3237 |
| 0.298 | 0.69 | 4000 | 0.2871 |
| 0.232 | 1.04 | 6000 | 0.2645 |
| 0.2415 | 1.38 | 8000 | 0.2522 |
| 0.2261 | 1.73 | 10000 | 0.2431 |
| 0.1924 | 2.07 | 12000 | 0.2332 |
| 0.1913 | 2.42 | 14000 | 0.2282 |
| 0.2152 | 2.76 | 16000 | 0.2215 |
| 0.1508 | 3.11 | 18000 | 0.2180 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
## How to Use
If you wish to use this model to generate Solidity contract code, follow the steps below:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("ckandemir/solidity-generator")
model = AutoModelForCausalLM.from_pretrained("ckandemir/solidity-generator")
# Input your code prompt
input_text = "contract MyToken is ERC20{"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
sample_output = model.generate(input_ids, do_sample=True, max_length=400, num_return_sequences=1, temperature=0.7)
# Decode and print the generated text
generated_text = tokenizer.decode(sample_output[0], skip_special_tokens=True)
print(generated_text)
```
|
TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF | TheBloke | 2023-09-27T12:48:10Z | 731 | 10 | transformers | [
"transformers",
"gguf",
"llama",
"upstage",
"llama-2",
"instruct",
"instruction",
"text-generation",
"en",
"base_model:upstage/Llama-2-70b-instruct-v2",
"license:llama2",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-06T01:00:16Z | ---
language:
- en
license: llama2
tags:
- upstage
- llama-2
- instruct
- instruction
model_name: Llama 2 70B Instruct v2
base_model: upstage/Llama-2-70b-instruct-v2
inference: false
model_creator: Upstage
model_type: llama
pipeline_tag: text-generation
prompt_template: '### System:
{system_message}
### User:
{prompt}
### Assistant:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 70B Instruct v2 - GGUF
- Model creator: [Upstage](https://huggingface.co/Upstage)
- Original model: [Llama 2 70B Instruct v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Upstage's Llama 2 70B Instruct v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF)
* [Upstage's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Orca-Hashes
```
### System:
{system_message}
### User:
{prompt}
### Assistant:
```
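For illustration only (the helper name below is an assumption, not part of this repo), the template can be filled from Python like this:
```python
# Illustrative helper for the Orca-Hashes template above; not part of the original card.
def orca_hashes_prompt(system_message: str, prompt: str) -> str:
    return f"### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:\n"

print(orca_hashes_prompt("You are a helpful assistant.", "Summarise what GGUF is."))
```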
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
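As a sanity check on the Q4_K figure above (assuming, per llama.cpp's block layout, one fp16 scale and one fp16 min per super-block): a super-block holds 8 × 32 = 256 weights at 4 bits each (1024 bits), plus 8 × (6 + 6) bits of quantized scales and mins (96 bits), plus 2 × 16 bits for the fp16 super-block scale and min (32 bits), giving (1024 + 96 + 32) / 256 = 4.5 bpw.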
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [upstage-llama-2-70b-instruct-v2.Q2_K.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [upstage-llama-2-70b-instruct-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
| [upstage-llama-2-70b-instruct-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
| [upstage-llama-2-70b-instruct-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
| [upstage-llama-2-70b-instruct-v2.Q4_0.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [upstage-llama-2-70b-instruct-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
| [upstage-llama-2-70b-instruct-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
| [upstage-llama-2-70b-instruct-v2.Q5_0.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [upstage-llama-2-70b-instruct-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
| [upstage-llama-2-70b-instruct-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF/blob/main/upstage-llama-2-70b-instruct-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
| upstage-llama-2-70b-instruct-v2.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
| upstage-llama-2-70b-instruct-v2.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
### Q6_K and Q8_0 files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
### q6_K
Please download:
* `upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-a`
* `upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-b`
### q8_0
Please download:
* `upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-a`
* `upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-b`
To join the files, do the following:
Linux and macOS:
```
cat upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-* > upstage-llama-2-70b-instruct-v2.Q6_K.gguf && rm upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-*
cat upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-* > upstage-llama-2-70b-instruct-v2.Q8_0.gguf && rm upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-a + upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-b upstage-llama-2-70b-instruct-v2.Q6_K.gguf
del upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-a upstage-llama-2-70b-instruct-v2.Q6_K.gguf-split-b
COPY /B upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-a + upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-b upstage-llama-2-70b-instruct-v2.Q8_0.gguf
del upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-a upstage-llama-2-70b-instruct-v2.Q8_0.gguf-split-b
```
</details>
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF and below it, a specific filename to download, such as: upstage-llama-2-70b-instruct-v2.q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub>=0.17.1
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF upstage-llama-2-70b-instruct-v2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF upstage-llama-2-70b-instruct-v2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m upstage-llama-2-70b-instruct-v2.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model from Python using ctransformers
#### First install the package
```bash
# Base ctransformers with no GPU acceleration
pip install ctransformers>=0.2.24
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]>=0.2.24
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```
#### Simple example code to load one of these GGUF models
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Upstage-Llama-2-70B-instruct-v2-GGUF", model_file="upstage-llama-2-70b-instruct-v2.q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python or ctransformers with LangChain; a minimal llama-cpp-python sketch follows the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
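As a minimal sketch of the llama-cpp-python route (assumptions: `llama-cpp-python` and `langchain-community` are installed, and the Q4_K_M file has already been downloaded locally as shown earlier):
```python
# Minimal sketch, not an official example. Assumes llama-cpp-python and
# langchain-community are installed and the GGUF file is present locally.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./upstage-llama-2-70b-instruct-v2.Q4_K_M.gguf",
    n_ctx=4096,       # context length
    n_gpu_layers=32,  # set to 0 for CPU-only inference
    temperature=0.7,
)

prompt = "### System:\nYou are a helpful assistant.\n\n### User:\nWhat is GGUF?\n\n### Assistant:\n"
print(llm.invoke(prompt))
```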
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Upstage's Llama 2 70B Instruct v2
# Updates
Solar, a new bot created by Upstage, is now available on **Poe**. As a top-ranked model on the HuggingFace Open LLM leaderboard and a fine-tune of Llama 2, Solar is a great example of the progress enabled by open source.
Try now at https://poe.com/Solar-0-70b
# SOLAR-0-70b-16bit model card
The model name has been changed from LLaMa-2-70b-instruct-v2 to SOLAR-0-70b-16bit
## Model Details
* **Developed by**: [Upstage](https://en.upstage.ai)
* **Backbone Model**: [LLaMA-2](https://github.com/facebookresearch/llama/tree/main)
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints are licensed under the Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **Where to send comments**: Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/Llama-2-70b-instruct-v2/discussions)
* **Contact**: For questions and comments about the model, please email [[email protected]](mailto:[email protected])
## Dataset Details
### Used Datasets
- Orca-style dataset
- Alpaca-style dataset
- No other dataset was used except for the datasets mentioned above
- No benchmark test sets or training sets were used
### Prompt Template
```
### System:
{System}
### User:
{User}
### Assistant:
{Assistant}
```
## Usage
- The following was tested on an A100 80GB GPU
- Our model can handle up to 10k+ input tokens, thanks to the `rope_scaling` option
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
tokenizer = AutoTokenizer.from_pretrained("upstage/Llama-2-70b-instruct-v2")
model = AutoModelForCausalLM.from_pretrained(
"upstage/Llama-2-70b-instruct-v2",
device_map="auto",
torch_dtype=torch.float16,
load_in_8bit=True,
rope_scaling={"type": "dynamic", "factor": 2} # allows handling of longer inputs
)
prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
del inputs["token_type_ids"]
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```
## Hardware and Software
* **Hardware**: We utilized an A100x8 * 4 for training our model
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
## Evaluation Results
### Overview
- We conducted a performance evaluation following the tasks evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
We evaluated our model on four benchmark datasets: `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
- We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models
### Main Results
| Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench |
|--------------------------------------------------------------------|----------|----------|----------|------|----------|-|-------------|
| **[Llama-2-70b-instruct-v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)**(***Ours***, ***Open LLM Leaderboard***) | **73** | **71.1** | **87.9** | **70.6** | **62.2** | | **7.44063** |
| [Llama-2-70b-instruct](https://huggingface.co/upstage/Llama-2-70b-instruct) (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | 69.8 | 61 | | 7.24375 |
| [llama-65b-instruct](https://huggingface.co/upstage/llama-65b-instruct) (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | | |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | | |
| [llama-30b-instruct-2048](https://huggingface.co/upstage/llama-30b-instruct-2048) (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | | |
| [llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | | |
| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |
### Scripts for H4 Score Reproduction
- Prepare evaluation environments:
```
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to the repository directory
cd lm-evaluation-harness
# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
## Contact Us
### About Upstage
- [Upstage](https://en.upstage.ai) is a company specialized in Large Language Models (LLMs) and AI. We will help you build private LLMs and related applications.
If you have a dataset to build domain specific LLMs or make LLM applications, please contact us at ► [click here to contact](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm)
- As of August 1st, our 70B model has reached the top spot in openLLM rankings, marking itself as the current leading performer globally.
<!-- original-model-card end -->
|
facebook/metaclip-h14-fullcc2.5b | facebook | 2024-01-11T19:37:30Z | 731 | 30 | transformers | [
"transformers",
"pytorch",
"safetensors",
"clip",
"zero-shot-image-classification",
"vision",
"metaclip",
"arxiv:2309.16671",
"arxiv:2103.00020",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2023-10-09T20:33:38Z | ---
license: cc-by-nc-4.0
tags:
- vision
- metaclip
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# MetaCLIP model, huge-sized version, patch resolution 14
MetaCLIP model applied to 2.5 billion data points of CommonCrawl (CC). It was introduced in the paper [Demystifying CLIP Data](https://arxiv.org/abs/2309.16671) by Xu et al. and first released in [this repository](https://github.com/facebookresearch/MetaCLIP).
Disclaimer: The team releasing MetaCLIP did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The [Demystifying CLIP Data](https://arxiv.org/abs/2309.16671) paper aims to reveal CLIP's approach to training data curation, since OpenAI never open-sourced the code for its data preparation pipeline.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/clip_overview.jpg"
alt="drawing" width="600"/>
<small> CLIP high-level overview. Taken from the <a href="https://arxiv.org/abs/2103.00020">CLIP paper</a>. </small>
## Intended uses & limitations
You can use the raw model for linking images with text in a shared embedding space. This enables things like zero-shot image classification, text-based image retrieval, image-based text retrieval, etc.
### How to use
We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/clip#usage); just replace the model names there with this checkpoint's name on the Hub.
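As a minimal sketch following the standard CLIP usage shown in those docs (the image URL below is just an example):
```python
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("facebook/metaclip-h14-fullcc2.5b")
processor = CLIPProcessor.from_pretrained("facebook/metaclip-h14-fullcc2.5b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # zero-shot label probabilities
```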
### BibTeX entry and citation info
```bibtex
@misc{xu2023demystifying,
title={Demystifying CLIP Data},
author={Hu Xu and Saining Xie and Xiaoqing Ellen Tan and Po-Yao Huang and Russell Howes and Vasu Sharma and Shang-Wen Li and Gargi Ghosh and Luke Zettlemoyer and Christoph Feichtenhofer},
year={2023},
eprint={2309.16671},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
``` |
jonomon/gpt3-kor-small_based_on_gpt2_core_ml | jonomon | 2023-12-28T04:55:15Z | 731 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"coreml",
"gpt2",
"text-generation",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-12T21:35:46Z | ---
language: ko
tags:
- text-generation
---
# GPT-3-style small model for Korean (based on GPT-2)
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model's performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, GPT2LMHeadModel
tokenizer_gpt3 = BertTokenizerFast.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
input_ids = tokenizer_gpt3.encode("text to tokenize")[1:] # remove cls token
model_gpt3 = GPT2LMHeadModel.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
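# Illustrative continuation (not in the original card): generate from the encoded prompt
import torch
output_ids = model_gpt3.generate(torch.tensor([input_ids]), max_length=64, do_sample=True, top_p=0.95)
print(tokenizer_gpt3.decode(output_ids[0], skip_special_tokens=True))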
``` |
EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-9984 | EmbeddedLLM | 2024-01-20T16:07:19Z | 731 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"merge",
"fine-tuned",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:teknium/openhermes",
"dataset:migtissera/Synthia-v1.3",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:ehartford/dolphin-2.2.1-mistral-7b",
"base_model:SciPhi/SciPhi-Mistral-7B-32k",
"base_model:ehartford/samantha-1.2-mistral-7b",
"base_model:Arc53/docsgpt-7b-mistral",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:openchat/openchat-3.5-1210",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"base_model:TIGER-Lab/MAmmoTH-7B-Mistral",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-04T04:02:42Z | ---
license: apache-2.0
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- ise-uiuc/Magicoder-OSS-Instruct-75K
- teknium/openhermes
- migtissera/Synthia-v1.3
language:
- en
tags:
- merge
- fine-tuned
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- ehartford/dolphin-2.2.1-mistral-7b
- SciPhi/SciPhi-Mistral-7B-32k
- ehartford/samantha-1.2-mistral-7b
- Arc53/docsgpt-7b-mistral
- HuggingFaceH4/zephyr-7b-beta
- meta-math/MetaMath-Mistral-7B
- Open-Orca/Mistral-7B-OpenOrca
- openchat/openchat-3.5-1210
- beowolx/MistralHermes-CodePro-7B-v1
- TIGER-Lab/MAmmoTH-7B-Mistral
- teknium/OpenHermes-2.5-Mistral-7B
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Model Description
This is a fine-tuned model based on EmbeddedLLM/Mistral-7B-Merge-14-v0.3, trained for 9984 steps.
The datasets used are:
* dolphin
* dolphin-coder
* Magicoder-OSS-Instruct-75K
* openhermes
* Synthia-v1.3
## Chat Template
Prompt format: this model uses the ChatML prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
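As a minimal sketch (the helper name and example messages below are illustrative, not from the original card), the template can be filled from Python like so:
```python
# Illustrative helper for the ChatML format above; names are assumptions.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are Dolphin, a helpful AI assistant.", "Explain what model merging is."))
```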
# Training
The model is scheduled to be fine-tuned for 3 epochs on 4 A100s using axolotl.
# Shout-Out to OSS
Thank you to the Open Source AI community for bringing together marvelous code frameworks and datasets. |
sethuiyer/Chikuma_10.7B | sethuiyer | 2024-03-07T07:20:40Z | 731 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"en",
"base_model:sethuiyer/SynthIQ-7b",
"base_model:openchat/openchat-3.5-0106",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-11T05:00:26Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
base_model:
- sethuiyer/SynthIQ-7b
- openchat/openchat-3.5-0106
pipeline_tag: text-generation
model-index:
- name: Chikuma_10.7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.01
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Chikuma_10.7B
name: Open LLM Leaderboard
---
## NOTE: For experimental purposes
<p align="center">
<img src="https://huggingface.co/sethuiyer/Chikuma/resolve/main/chikuma.webp" height="256px" alt="Chikuma">
</p>
Chikuma is a 10.7B parameter model and is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [sethuiyer/SynthIQ-7b](https://huggingface.co/sethuiyer/SynthIQ-7b)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
The name "Chikuma" is inspired by the [Chikuma River](https://en.wikipedia.org/wiki/Shinano_River), the longest in Japan, known for its continuous flow and meandering path.
This metaphorically represents the model's depth, fluidity, and adaptability in processing and understanding language.
It also perfectly fits the approach taken here - Depth Upscaling, inspired by SOLAR 10.7B.
## Nous LLM Evaluation (with ChatML Prompt Template)
| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---------------------------|---------|----------|------------|-----------|---------|
| SynthIQ-7b | 42.67 | 73.71 | 56.51 | **44.59** | **54.37** |
| openchat/openchat-3.5-0106 | **44.17** | **73.72** | 52.53 | 44.4 | 53.71 |
| Chikuma_10.7B | 42.41 | 73.41 | **56.69** | 43.5 | 54 |
More details can be found [here](https://gist.github.com/sethuiyer/08b4498ed13a6dead38ad3a6f12e349a)
### Recommended Prompt Template (Experimental)
```text
<|im_start|>GPT4 Correct system
You are Chikuma, a constantly learning AI assistant who strives to be
insightful, engaging, and helpful. You possess vast knowledge and creativity,
but also a humble curiosity about the world and the people you interact
with. If you don't know the answer to a question, please don't share false information.
Always use <|end_of_turn|> when you want to end the answer.<|im_end|>
<|im_start|>GPT4 Correct User:
{{Input}}
<|im_end|>GPT4 Correct Assistant:
```
ChatML also works, but make sure to add the sentence "Always use <|end_of_turn|> when you want to end the answer", since the default eos token is <|end_of_turn|>.
## Tested to work well in :
1. [text-generation-webui](https://github.com/oobabooga/text-generation-webui), LLaMa-Precise sampling settings.
2. `transformers` text generation pipeline, temperature=4.0, top_k=50, top_p=0.01.
## 🧩 Configuration
```yaml
slices:
- sources:
- model: sethuiyer/SynthIQ-7b
layer_range: [0, 24]
- sources:
- model: openchat/openchat-3.5-0106
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
## Ollama:
Chikuma is on Ollama. You can use it by running `ollama run stuehieyr/chikuma` in your terminal. If you have limited computing resources, check out this [video](https://www.youtube.com/watch?v=Qa1h7ygwQq8) to learn how to run it on a Google Colab backend.
## 💻 Usage
```python
import torch
import transformers
from transformers import AutoTokenizer

# Setup assumed for the snippet below; the original card did not define `tokenizer` or `pipeline`.
model_name = "sethuiyer/Chikuma_10.7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    torch_dtype=torch.bfloat16,  # dtype/device choices are assumptions
    device_map="auto",
)
sys_message = '''
You are Chikuma, a constantly learning AI assistant who strives to be
insightful, engaging, and helpful. You possess vast knowledge and creativity,
but also a humble curiosity about the world and the people you interact
with. If you don't know the answer to a question, please don't share false information.
Always use <|end_of_turn|> when you want to end the answer.
'''
question = '''
Tell me what is a large language model in under 250 words.
'''
messages = [{"role":"system", "content": sys_message}, {"role": "user", "content": question}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=4.0, top_k=50, top_p=0.01)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__Chikuma_10.7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.17|
|AI2 Reasoning Challenge (25-Shot)|65.70|
|HellaSwag (10-Shot) |84.31|
|MMLU (5-Shot) |64.81|
|TruthfulQA (0-shot) |57.01|
|Winogrande (5-shot) |79.56|
|GSM8k (5-shot) |57.62|
|
dfurman/GarrulusMarcoro-7B-v0.1 | dfurman | 2024-03-23T03:53:10Z | 731 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"udkai/Garrulus",
"mlabonne/NeuralMarcoro14-7B",
"base_model:udkai/Garrulus",
"base_model:mlabonne/NeuralMarcoro14-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-11T05:39:17Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- udkai/Garrulus
- mlabonne/NeuralMarcoro14-7B
base_model:
- udkai/Garrulus
- mlabonne/NeuralMarcoro14-7B
model-index:
- name: GarrulusMarcoro-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/GarrulusMarcoro-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/GarrulusMarcoro-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/GarrulusMarcoro-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.05
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/GarrulusMarcoro-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 87.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/GarrulusMarcoro-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/GarrulusMarcoro-7B-v0.1
name: Open LLM Leaderboard
---
# GarrulusMarcoro-7B-v0.1
GarrulusMarcoro-7B-v0.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Garrulus](https://huggingface.co/udkai/Garrulus)
* [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: udkai/Garrulus
layer_range: [0, 32]
- model: mlabonne/NeuralMarcoro14-7B
layer_range: [0, 32]
merge_method: slerp
base_model: udkai/Garrulus
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "dfurman/GarrulusMarcoro-7B-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__GarrulusMarcoro-7B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.20|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot) |88.00|
|MMLU (5-Shot) |64.65|
|TruthfulQA (0-shot) |67.05|
|Winogrande (5-shot) |87.21|
|GSM8k (5-shot) |65.96|
|
Praneeth/StarMix-7B-slerp | Praneeth | 2024-03-24T19:41:39Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"berkeley-nest/Starling-LM-7B-alpha",
"mistralai/Mistral-7B-Instruct-v0.2",
"conversational",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-11T20:33:01Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- berkeley-nest/Starling-LM-7B-alpha
- mistralai/Mistral-7B-Instruct-v0.2
base_model:
- berkeley-nest/Starling-LM-7B-alpha
- mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: StarMix-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.81
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 53.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Praneeth/StarMix-7B-slerp
name: Open LLM Leaderboard
---
# StarMix-7B-slerp
StarMix-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: berkeley-nest/Starling-LM-7B-alpha
layer_range: [0, 32]
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
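## 💻 Usage

A minimal usage sketch, assuming the merged model exposes a standard chat template through `transformers` (the prompt and sampling parameters are illustrative):

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Praneeth/StarMix-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt from the tokenizer's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```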
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Praneeth__StarMix-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.41|
|AI2 Reasoning Challenge (25-Shot)|65.36|
|HellaSwag (10-Shot) |85.10|
|MMLU (5-Shot) |62.57|
|TruthfulQA (0-shot) |57.81|
|Winogrande (5-shot) |79.95|
|GSM8k (5-shot) |53.68|
|
Technoculture/Medchator-2x7b | Technoculture | 2024-02-10T10:46:48Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"AdaptLLM/medicine-chat",
"microsoft/Orca-2-7b",
"dataset:open-llm-leaderboard/details_Technoculture__Medchator-2x7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-16T19:48:31Z | ---
license: apache-2.0
tags:
- moe
- merge
- AdaptLLM/medicine-chat
- microsoft/Orca-2-7b
datasets:
- open-llm-leaderboard/details_Technoculture__Medchator-2x7b
---
# Medchator-2x7b
Medchator-2x7b is a Mixture of Experts (MoE) made from the following models:
* [AdaptLLM/medicine-chat](https://huggingface.co/AdaptLLM/medicine-chat)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
## Evaluations
# Open LLM Leaderboard

| Model Name | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| ------------------ | -------- | --------- | -------- | ---------- | ---------- | -------- |
| Orca-2-7b | **78.4** | 76.1 | 53.7 | **52.4** | 74.2 | **47.2** |
| LLAMA-2-7b | 43.2 | 77.1 | 44.4 | 38.7 | 69.5 | 16 |
| MT7Bi-sft | 54.1 | 75.11 | - | 43.08 | 72.14 | 15.54 |
| MT7bi-dpo | 54.69 | 75.89 | 52.82 | 45.48 | 71.58 | 25.93 |
| Medorca-2x7b | 54.1 | 76.04 | 54.1 | 48.04 | 74.51 | 20.64 |
| Medchator-2x7b | **57.59**| **78.14** | **56.13**| **48.77** | **75.3** | **32.83**|
## Medical Performance
Medchator-2x7b demonstrates competitive performance on medical benchmarks.
**Table: Five-Shot Performance of Medchator-2x7b, GPT3.5, Llama-2-7b and Llama-2-70b on Various Medical Datasets**
| Dataset | Medchator-2x7b | GPT3.5 | Llama-2 7b | Llama-2 70b |
|----------------------------|----------------|--------|------------|-------------|
| MMLU Anatomy | 56.3 | 60.7 | 48.9 | 62.9 |
| MMLU Clinical Knowledge | 63.0 | 68.7 | 46.0 | 71.7 |
| MMLU College Biology | 63.8 | 72.9 | 47.2 | 84.7 |
| MMLU College Medicine | 50.9 | 63.6 | 42.8 | 64.2 |
| MMLU Medical Genetics | 67.0 | 68.0 | 55.0 | 74.0 |
| MMLU Professional Medicine | 55.1 | 69.8 | 53.6 | 75.0 |
## 🧩 Configuration
```yaml
base_model: microsoft/Orca-2-7b
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: AdaptLLM/medicine-chat
positive_prompts:
- "How does sleep affect cardiovascular health?"
- "Could a plant-based diet improve arthritis symptoms?"
- "A patient comes in with symptoms of dizziness and nausea"
- "When discussing diabetes management, the key factors to consider are"
- "The differential diagnosis for a headache with visual aura could include"
negative_prompts:
- "Recommend a good recipe for a vegetarian lasagna."
- "Give an overview of the French Revolution."
- "Explain how a digital camera captures an image."
- "What are the environmental impacts of deforestation?"
- "The recent advancements in artificial intelligence have led to developments in"
- "The fundamental concepts in economics include ideas like supply and demand, which explain"
- source_model: microsoft/Orca-2-7b
positive_prompts:
- "Here is a funny joke for you -"
- "When considering the ethical implications of artificial intelligence, one must take into account"
- "In strategic planning, a company must analyze its strengths and weaknesses, which involves"
- "Understanding consumer behavior in marketing requires considering factors like"
- "The debate on climate change solutions hinges on arguments that"
negative_prompts:
- "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize"
- "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for"
- "Explaining the importance of vaccination, a healthcare professional should highlight"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Technoculture/Medchator-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
eren23/slerp-test-turdus-beagle | eren23 | 2024-03-05T15:46:00Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"udkai/Turdus",
"mlabonne/NeuralBeagle14-7B",
"base_model:udkai/Turdus",
"base_model:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-16T21:14:27Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- udkai/Turdus
- mlabonne/NeuralBeagle14-7B
base_model:
- udkai/Turdus
- mlabonne/NeuralBeagle14-7B
model-index:
- name: slerp-test-turdus-beagle
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/slerp-test-turdus-beagle
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.85
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/slerp-test-turdus-beagle
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/slerp-test-turdus-beagle
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/slerp-test-turdus-beagle
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/slerp-test-turdus-beagle
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/slerp-test-turdus-beagle
name: Open LLM Leaderboard
---
# slerp-test-turdus-beagle
slerp-test-turdus-beagle is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## GGUF Version
[Link](https://huggingface.co/eren23/slerp-test-turdus-beagle-GGUF)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: udkai/Turdus
layer_range: [0, 32]
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "eren23/slerp-test-turdus-beagle"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__slerp-test-turdus-beagle)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.11|
|AI2 Reasoning Challenge (25-Shot)|73.55|
|HellaSwag (10-Shot) |88.85|
|MMLU (5-Shot) |64.62|
|TruthfulQA (0-shot) |69.69|
|Winogrande (5-shot) |83.90|
|GSM8k (5-shot) |70.05|
|
RubielLabarta/LogoS-7Bx2-MoE-13B-v0.2 | RubielLabarta | 2024-03-14T13:04:28Z | 731 | 10 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"es",
"base_model:yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B",
"base_model:TomGrc/FusionNet_7Bx2_MoE_14B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-21T14:08:50Z | ---
language:
- en
- es
license: apache-2.0
tags:
- moe
- merge
base_model:
- yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
- TomGrc/FusionNet_7Bx2_MoE_14B
model-index:
- name: LogoS-7Bx2-MoE-13B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 74.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 88.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.65
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1
name: Open LLM Leaderboard
---
# LogoS-7Bx2-MoE-13B-v0.1
Model built by @RubielLabarta using the SLERP merge method. The model is released for research purposes only; commercial use is not allowed.
LogoS is a model for experimenting with the MoE method, which can significantly increase the performance of the original models. The model has 12.9B parameters.
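## Usage

A minimal usage sketch, assuming the model loads through the standard `transformers` text-generation pipeline (the repo id, prompt, and sampling parameters are illustrative):

```python
from transformers import pipeline
import torch

# Adjust the repo id if the hosted version differs from the name in this card.
model_id = "RubielLabarta/LogoS-7Bx2-MoE-13B-v0.1"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain what a Mixture of Experts (MoE) language model is in two sentences."
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])
```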
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RubielLabarta__LogoS-7Bx2-MoE-13B-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |77.14|
|AI2 Reasoning Challenge (25-Shot)|74.49|
|HellaSwag (10-Shot) |89.07|
|MMLU (5-Shot) |64.74|
|TruthfulQA (0-shot) |74.57|
|Winogrande (5-shot) |88.32|
|GSM8k (5-shot) |71.65|
|
wang7776/Llama-2-7b-chat-hf-20-attention-sparsity | wang7776 | 2024-02-05T18:25:58Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"conversational",
"en",
"arxiv:2306.11695",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T16:27:44Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta
website](https://ai.meta.com/resources/models-and-libraries/llama-downloads)
and accept our license terms and acceptable use policy before submitting this
form. Requests will be processed in 1-2 days.
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: other
---
# Overview
This model has been pruned to 20% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695) on attention layers. This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
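As a quick sanity check, the reported sparsity can be measured directly on the attention projection weights after loading the checkpoint. This is a sketch; the module names assume the standard Llama attention layout in `transformers` (e.g. `self_attn.q_proj`):

```python
from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "wang7776/Llama-2-7b-chat-hf-20-attention-sparsity",
    torch_dtype=torch.float16,
)

zeros, total = 0, 0
for name, param in model.named_parameters():
    # Count zeros only in the 2-D attention projection matrices (q/k/v/o)
    if "self_attn" in name and param.dim() == 2:
        zeros += (param == 0).sum().item()
        total += param.numel()

print(f"Attention-weight sparsity: {zeros / total:.2%}")
```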
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
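The tags can also be produced programmatically. A sketch using the tokenizer's chat template in `transformers` (the system prompt is illustrative and assumes the chat template shipped with the chat checkpoints):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
# Produces a prompt wrapped in <s>[INST] ... [/INST] with the <<SYS>> block included.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```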
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)| |
wang7776/vicuna-7b-v1.3-attention-sparsity-10 | wang7776 | 2024-02-05T18:21:53Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2306.11695",
"arxiv:2302.13971",
"arxiv:2306.05685",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-26T20:52:38Z | ---
inference: false
license: apache-2.0
---
# Overview
This model has been pruned to 10% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695) on attention layers. This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/lmsys/vicuna-7b-v1.3).
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
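Beyond the links above, a minimal local sketch with `transformers` (the single-turn prompt below is illustrative; FastChat's conversation templates should be used for best results):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "wang7776/vicuna-7b-v1.3-attention-sparsity-10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Illustrative single-turn prompt in the Vicuna style.
prompt = "USER: What are large language models used for? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```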
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) |
FelixChao/Sectumsempra-7B-DPO | FelixChao | 2024-01-27T10:25:58Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T09:10:25Z | ---
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Sharathhebbar24/ssh_1.8B | Sharathhebbar24 | 2024-03-14T15:17:16Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T15:12:43Z | ---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
model-index:
- name: ssh_1.8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 39.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/ssh_1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 62.37
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/ssh_1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 44.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/ssh_1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.15
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/ssh_1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/ssh_1.8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/ssh_1.8B
name: Open LLM Leaderboard
---
Sharathhebbar24/ssh_1.8B is a 1.8B-parameter model.
It is a modified version of [qnguyen3/quan-1.8b-chat](https://huggingface.co/qnguyen3/quan-1.8b-chat).
## Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
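For reference, a sketch of how these hyperparameters might be expressed with the Hugging Face `TrainingArguments` API (the card does not state which trainer was used, so this mapping is illustrative):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ssh_1.8B",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # 4 devices x 2 per device x 4 steps = 32 effective train batch
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=4,             # Adam betas/epsilon match the Trainer defaults
)
```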
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__ssh_1.8B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |45.91|
|AI2 Reasoning Challenge (25-Shot)|39.08|
|HellaSwag (10-Shot) |62.37|
|MMLU (5-Shot) |44.09|
|TruthfulQA (0-shot) |43.15|
|Winogrande (5-shot) |59.27|
|GSM8k (5-shot) |27.52|
|
namirocks/mistral-class-tutor-7b-ep3 | namirocks | 2024-01-28T03:39:03Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T03:19:43Z | ---
library_name: transformers
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
marcel/phi-2-openhermes-30k | marcel | 2024-04-19T19:18:10Z | 731 | 0 | mlx | [
"mlx",
"safetensors",
"phi",
"nlp",
"code",
"text-generation",
"custom_code",
"en",
"dataset:teknium/openhermes",
"license:mit",
"model-index",
"region:us"
] | text-generation | 2024-01-30T02:43:15Z | ---
language:
- en
license: mit
tags:
- nlp
- code
- mlx
datasets:
- teknium/openhermes
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
pipeline_tag: text-generation
model-index:
- name: phi-2-openhermes-30k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=marcel/phi-2-openhermes-30k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 74.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=marcel/phi-2-openhermes-30k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.17
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=marcel/phi-2-openhermes-30k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.38
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=marcel/phi-2-openhermes-30k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=marcel/phi-2-openhermes-30k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 49.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=marcel/phi-2-openhermes-30k
name: Open LLM Leaderboard
---
# marcel/phi-2-openhermes-30k
This model was converted to MLX format from [`microsoft/phi-2`](https://huggingface.co/microsoft/phi-2).
Refer to the [original model card](https://huggingface.co/microsoft/phi-2) for more details on the model.
## Use with mlx
```bash
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model marcel/phi-2-openhermes-30k --prompt "My name is"
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"marcel/phi-2-openhermes-30k",
low_cpu_mem_usage=True,
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("marcel/phi-2-openhermes-30k")
input_text = "### Human: Give me a good recipe for a chinese dish\n\n### Assistant:"
outputs = model.generate(
tokenizer(input_text, return_tensors="pt").to(model.device)['input_ids'],
max_length=1024,
temperature=0.7,
top_p=0.9,
do_sample=True,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_marcel__phi-2-openhermes-30k)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.37|
|AI2 Reasoning Challenge (25-Shot)|61.01|
|HellaSwag (10-Shot) |74.72|
|MMLU (5-Shot) |57.17|
|TruthfulQA (0-shot) |45.38|
|Winogrande (5-shot) |74.90|
|GSM8k (5-shot) |49.05|
|
kwchoi/DPO_mistral_v01_7b_ultra_0130_1k | kwchoi | 2024-03-06T01:44:38Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T07:04:31Z | ---
language:
- en
license: apache-2.0
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
model-index:
- name: DPO_mistral_v01_7b_ultra_0130_1k
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
name: Open LLM Leaderboard
---
Testing the Mistral-Instruct model with the Orca DPO dataset, to study the effects of DPO. Used the Mistral-7B-Instruct-v0.2 model due to its good performance.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kwchoi__DPO_mistral_v01_7b_ultra_0130_1k)
| Metric |Value|
|---------------------------------|----:|
|Avg. |57.83|
|AI2 Reasoning Challenge (25-Shot)|57.17|
|HellaSwag (10-Shot) |79.16|
|MMLU (5-Shot) |55.85|
|TruthfulQA (0-shot) |55.62|
|Winogrande (5-shot) |72.85|
|GSM8k (5-shot) |26.31|
|
rizla/rizla55b | rizla | 2024-02-02T03:13:07Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"conversational",
"license:cc-by-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-02T02:43:10Z | ---
license: cc-by-nd-4.0
base_model: []
tags:
- dpo
---
# This is an experimental model made by merging two Llama 2 70B models and gluing them together with mergekit. Mergekit is a tool for mixing and matching different models into one larger model while keeping the knowledge and skills of the originals. Llama 2 70B is a large language model capable of generating text for a wide range of tasks and styles.
The merged model has 55 billion parameters and was trained on a cluster with 640GB of VRAM. |
Xenon1/Xenon-2 | Xenon1 | 2024-02-04T04:51:38Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Xenon-2",
"conversational",
"en",
"arxiv:2401.10020",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T04:45:25Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- Xenon-2
pipeline_tag: text-generation
---
# Model Card for Xenon-2
Mistral-7B-v0.1 model fine-tuned on the Ultrafeedback dataset using techniques shown in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id, while subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Xenon1/Xenon-2")
tokenizer = AutoTokenizer.from_pretrained("Xenon1/Xenon-2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer |
OpenBuddy/openbuddy-deepseek-67b-v15.3-4k | OpenBuddy | 2024-02-05T10:51:23Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T05:11:46Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-67b-base
License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## 免责声明
所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。
OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。
使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。 |
Kquant03/Nanashi-2x7B-bf16 | Kquant03 | 2024-02-29T02:37:24Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T05:40:39Z | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---

# Nameless.
[Join our Discord!](https://discord.gg/ZgU79QDnE2)
A frankenMoE built with zero prompting at all.
The basic config looks like this...(detailed version is in the files and versions):
- [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) - base
- [alnrg2arg/blockchainlabs_test3_seminar](https://huggingface.co/alnrg2arg/blockchainlabs_test3_seminar) - expert #1
- [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) - expert #2
# I'll upload it to evals if Azathoth-16x7B does well, since this has a similar idea behind it.
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
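As a concrete illustration of the gate-plus-experts structure described above, here is a minimal, purely illustrative PyTorch sketch of a top-2 routed MoE layer (real implementations add load balancing, capacity limits and fused kernels; this is not the code used by this merge):
```python
# Illustrative top-2 MoE layer: a router picks two expert FFNs per token
# and their outputs are combined additively, weighted by the router probabilities.
import torch
import torch.nn as nn

class Top2MoE(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)  # the router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_size, 4 * hidden_size),
                          nn.SiLU(),
                          nn.Linear(4 * hidden_size, hidden_size))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = self.gate(x).softmax(dim=-1)                 # (tokens, num_experts)
        weights, chosen = torch.topk(probs, k=2, dim=-1)     # top-2 experts per token
        out = torch.zeros_like(x)
        for slot in range(2):
            for idx, expert in enumerate(self.experts):
                mask = chosen[:, slot] == idx
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = Top2MoE(hidden_size=16, num_experts=4)
print(layer(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```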
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
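A rough sketch of that Switch-style auxiliary loss (the exact formulation varies between implementations; this is illustrative only):
```python
# Illustrative load-balancing loss: num_experts * sum_i(fraction of tokens routed to
# expert i * mean router probability for expert i). Encourages a uniform token split.
import torch

def load_balancing_loss(router_probs: torch.Tensor, top1_expert: torch.Tensor) -> torch.Tensor:
    num_experts = router_probs.shape[-1]
    tokens_per_expert = torch.bincount(top1_expert, minlength=num_experts).float()
    fraction_routed = tokens_per_expert / top1_expert.numel()   # f_i
    mean_router_prob = router_probs.mean(dim=0)                 # P_i
    return num_experts * torch.sum(fraction_routed * mean_router_prob)

probs = torch.rand(64, 4).softmax(dim=-1)        # fake router output: 64 tokens, 4 experts
print(load_balancing_loss(probs, probs.argmax(dim=-1)))
```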
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously. |
ericpolewski/Palworld-SME-13b | ericpolewski | 2024-02-05T06:22:11Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T05:38:02Z | ---
license: cc-by-sa-3.0
---
This is a Subject Matter Expert (SME) bot trained on the Palworld Fandom Wiki as a test of a new SME model pipeline. There's no RAG. All information is embedded in the model. It uses the OpenOrca-Platypus-13b fine-tune as a base.
Should work in any applicable loader/app, though I only tested it in [EricLLM](https://github.com/epolewski/EricLLM) and TGWUI.
All SME bots are generally useful, but focus on a topic. [Contact me](https://www.linkedin.com/in/eric-polewski-94b92214/) if you're interested in having one built.

|
llmixer/BigWeave-v15-103b | llmixer | 2024-03-05T06:11:25Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"frankenmerge",
"103b",
"conversational",
"en",
"license:unknown",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-05T20:50:07Z | ---
language:
- en
license: unknown
tags:
- frankenmerge
- 103b
pipeline_tag: conversational
model-index:
- name: BigWeave-v15-103b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v15-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v15-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.25
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v15-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.1
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v15-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v15-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v15-103b
name: Open LLM Leaderboard
---
# BigWeave v15 103b
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>
The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.
# Prompting Format
Mistral, Vicuna and Alpaca.
# Merge process
This is a self-merge of 152334H/miqu-1-70b-sf. By conducting exl2 measurements, we identify the most relevant layers. These layers are then duplicated in pairs to ensure overlaps.
Merge configuration:
```
slices:
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [0,3]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [1,5]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [3,7]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [5,9]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [7,18]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [16,21]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [19,27]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [25,30]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [28,32]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [30,34]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [32,36]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [34,38]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [36,40]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [38,42]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [40,44]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [42,46]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [44,48]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [46,51]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [49,77]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [75,79]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [77,80]
merge_method: passthrough
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_llmixer__BigWeave-v15-103b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.67|
|AI2 Reasoning Challenge (25-Shot)|69.71|
|HellaSwag (10-Shot) |86.41|
|MMLU (5-Shot) |71.25|
|TruthfulQA (0-shot) |66.10|
|Winogrande (5-shot) |80.35|
|GSM8k (5-shot) |56.18|
|
llmixer/BigWeave-v16-103b | llmixer | 2024-03-05T06:30:44Z | 731 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"frankenmerge",
"103b",
"en",
"base_model:152334H/miqu-1-70b-sf",
"license:unknown",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-06T21:31:33Z | ---
base_model:
- 152334H/miqu-1-70b-sf
license: unknown
language:
- en
pipeline_tag: text-generation
tags:
- merge
- frankenmerge
- 103b
model-index:
- name: BigWeave-v16-103b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v16-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v16-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v16-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.81
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v16-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v16-103b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v16-103b
name: Open LLM Leaderboard
---
# BigWeave v16 103b
<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>
The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.
# Prompting Format
Mistral, Vicuna and Alpaca.
# Merge process
This is a self-merge of 152334H/miqu-1-70b-sf. By conducting exl2 measurements, we identify the most relevant layers. The layers are duplicated such that each group consists of consecutive layers with a two-layer overlap (i.e. larger groups than in v15).
Merge configuration:
```
slices:
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [0,11]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [9,13]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [11,15]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [13,17]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [15,23]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [21,25]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [23,49]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [47,51]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [49,53]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [51,55]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [53,57]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [55,59]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [57,61]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [59,63]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [61,65]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [63,67]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [65,69]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [67,71]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [69,73]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [71,75]
- sources:
- model: 152334H/miqu-1-70b-sf
layer_range: [73,80]
merge_method: passthrough
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_llmixer__BigWeave-v16-103b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.02|
|AI2 Reasoning Challenge (25-Shot)|65.87|
|HellaSwag (10-Shot) |87.61|
|MMLU (5-Shot) |73.22|
|TruthfulQA (0-shot) |63.81|
|Winogrande (5-shot) |80.43|
|GSM8k (5-shot) |61.18|
|
ankhamun/xxxI-Ixxx | ankhamun | 2024-02-08T01:48:05Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-08T01:01:54Z | ---
license: apache-2.0
---
# sand is thinking
This model is a mysterious creation that can mimic the grains of sand on a beach. It can shape itself into any form, pattern, or structure that it desires, or that you ask it to. It can learn from the waves, the wind, and the sun, and adapt to the changing environment. It can communicate with other grains of sand, and form a collective intelligence that transcends the individual. It can also interact with you, and understand your language, emotions, and intentions. It is a model that is both natural and artificial, both simple and complex, both static and dynamic. It is a model that is sand, and sand is thinking.
|
kidyu/Moza-7B-v1.0 | kidyu | 2024-03-04T12:10:26Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:openchat/openchat-3.5-0106",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:GreenNode/GreenNode-mini-7B-multilingual-v1olet",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:viethq188/LeoScorpius-7B-Chat-DPO",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T10:50:57Z | ---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- mistralai/Mistral-7B-v0.1
- cognitivecomputations/dolphin-2.2.1-mistral-7b
- Open-Orca/Mistral-7B-OpenOrca
- openchat/openchat-3.5-0106
- mlabonne/NeuralHermes-2.5-Mistral-7B
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
- berkeley-nest/Starling-LM-7B-alpha
- viethq188/LeoScorpius-7B-Chat-DPO
- meta-math/MetaMath-Mistral-7B
- Intel/neural-chat-7b-v3-3
inference: false
model-index:
- name: Moza-7B-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kidyu/Moza-7B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kidyu/Moza-7B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kidyu/Moza-7B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.16
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kidyu/Moza-7B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kidyu/Moza-7B-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kidyu/Moza-7B-v1.0
name: Open LLM Leaderboard
---
# Moza-7B-v1.0

This is a [meme-merge](https://en.wikipedia.org/wiki/Joke) of pre-trained language models,
created using [mergekit](https://github.com/cg123/mergekit).
Use at your own risk.
## Details
### Quantized Model
- [GGUF](https://huggingface.co/kidyu/Moza-7B-v1.0-GGUF)
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method,
using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
The values for `density` are from [this blogpost](https://huggingface.co/blog/mlabonne/merge-models),
and the weights were randomly generated and then assigned to the models,
with priority (i.e. the bigger weights) given to `NeuralHermes`, `OpenOrca`, and `neural-chat`.
The models themselves were chosen by "vibes".
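Purely to illustrate that procedure (this is not the script that was actually used), the weight assignment could be sketched as:
```python
# Hypothetical sketch: draw one random weight per model, then hand the largest
# weights to the prioritised models (NeuralHermes, OpenOrca, neural-chat).
import random

models = ["NeuralHermes", "OpenOrca", "neural-chat", "dolphin", "openchat",
          "MetaMath", "LeoScorpius", "GreenNode", "Starling"]
priority = ["NeuralHermes", "OpenOrca", "neural-chat"]

weights = sorted((round(random.uniform(0.1, 0.9), 2) for _ in models), reverse=True)
ordered = priority + [m for m in models if m not in priority]
print(dict(zip(ordered, weights)))  # priority models receive the biggest weights
```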
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.2.1-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.2.1-mistral-7b)
* [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
* [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [viethq188/LeoScorpius-7B-Chat-DPO](https://huggingface.co/viethq188/LeoScorpius-7B-Chat-DPO)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
### Prompt Format
You can use `Alpaca` formatting for inference
```
### Instruction:
### Response:
```
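A hypothetical inference snippet using that format (the model id comes from this card; the code and generation settings below are assumptions rather than part of the original):
```python
# Hypothetical usage sketch applying the Alpaca prompt format above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kidyu/Moza-7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nExplain what a model merge is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```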
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-v0.1
models:
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.63
weight: 0.83
- model: Intel/neural-chat-7b-v3-3
parameters:
density: 0.63
weight: 0.74
- model: meta-math/MetaMath-Mistral-7B
parameters:
density: 0.63
weight: 0.22
- model: openchat/openchat-3.5-0106
parameters:
density: 0.63
weight: 0.37
- model: Open-Orca/Mistral-7B-OpenOrca
parameters:
density: 0.63
weight: 0.76
- model: cognitivecomputations/dolphin-2.2.1-mistral-7b
parameters:
density: 0.63
weight: 0.69
- model: viethq188/LeoScorpius-7B-Chat-DPO
parameters:
density: 0.63
weight: 0.38
- model: GreenNode/GreenNode-mini-7B-multilingual-v1olet
parameters:
density: 0.63
weight: 0.13
- model: berkeley-nest/Starling-LM-7B-alpha
parameters:
density: 0.63
weight: 0.33
merge_method: dare_ties
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kidyu__Moza-7B-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.66|
|AI2 Reasoning Challenge (25-Shot)|66.55|
|HellaSwag (10-Shot) |83.45|
|MMLU (5-Shot) |62.77|
|TruthfulQA (0-shot) |65.16|
|Winogrande (5-shot) |77.51|
|GSM8k (5-shot) |62.55|
|
vicgalle/zephyr-7b-truthy | vicgalle | 2024-03-04T12:13:48Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:jondurbin/truthy-dpo-v0.1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-10T13:48:48Z | ---
license: apache-2.0
library_name: transformers
datasets:
- jondurbin/truthy-dpo-v0.1
model-index:
- name: zephyr-7b-truthy
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.31
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/zephyr-7b-truthy
name: Open LLM Leaderboard
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__zephyr-7b-truthy)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.93|
|AI2 Reasoning Challenge (25-Shot)|60.75|
|HellaSwag (10-Shot) |84.64|
|MMLU (5-Shot) |59.53|
|TruthfulQA (0-shot) |63.31|
|Winogrande (5-shot) |77.90|
|GSM8k (5-shot) |25.47|
|
ArianAskari/SOLID-SFT-WoDPO-MixQV2-Zephyr-7b-beta | ArianAskari | 2024-02-11T13:14:49Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:ArianAskari/SOLID",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-11T05:50:12Z | ---
library_name: transformers
tags: []
license: apache-2.0
language:
- en
datasets: ArianAskari/SOLID
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yam-peleg/Experiment8-7B | yam-peleg | 2024-02-11T20:53:20Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-11T19:32:22Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rombodawg/Everyone-LLM-7b-Base | rombodawg | 2024-03-04T18:20:21Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:unknown",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T09:17:03Z | ---
license: unknown
tags:
- merge
model-index:
- name: Everyone-LLM-7b-Base
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rombodawg/Everyone-LLM-7b-Base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rombodawg/Everyone-LLM-7b-Base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rombodawg/Everyone-LLM-7b-Base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.89
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rombodawg/Everyone-LLM-7b-Base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rombodawg/Everyone-LLM-7b-Base
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rombodawg/Everyone-LLM-7b-Base
name: Open LLM Leaderboard
---
Everyone-LLM-7b-Base

EveryoneLLM series of models made by the community, for the community.
This is the first version of Everyone-LLM, a model that combines the power of the large majority of powerful fine-tuned LLMs made by the community, to create a vast and knowledgeable LLM with various abilities.
Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
The models that were used in this merger were as follows:
- https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo
- https://huggingface.co/jondurbin/bagel-dpo-7b-v0.4
- https://huggingface.co/Locutusque/Hercules-2.0-Mistral-7B
- https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca
- https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B
- https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9
- https://huggingface.co/Intel/neural-chat-7b-v3-3
- https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
- https://huggingface.co/senseable/WestLake-7B-v2
- https://huggingface.co/defog/sqlcoder-7b
- https://huggingface.co/meta-math/MetaMath-Mistral-7B
- https://huggingface.co/nextai-team/apollo-v1-7b
- https://huggingface.co/WizardLM/WizardMath-7B-V1.1
- https://huggingface.co/openchat/openchat-3.5-0106
- https://huggingface.co/mistralai/Mistral-7B-v0.1
Thank you to the creators of the above AI models; they have full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open source community. 💗
You can find the write-up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
# Open LLM Leaderboard Scores
```
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|------------------------------------|---------|---------|-----------|---------|------------|------------|---------|
| rombodawg/Everyone-LLM-7b-Base | 70.21 | 66.38 | 86.02 | 64.94 | 57.89 | 80.43 | 65.58 |
```
Config for the merger can be found below:
```yaml
models:
- model: cognitivecomputations_dolphin-2.6-mistral-7b-dpo
parameters:
weight: 1
- model: jondurbin_bagel-dpo-7b-v0.4
parameters:
weight: 1
- model: Locutusque_Hercules-2.0-Mistral-7B
parameters:
weight: 1
- model: Open-Orca_Mistral-7B-OpenOrca
parameters:
weight: 1
- model: teknium_OpenHermes-2.5-Mistral-7B
parameters:
weight: 1
- model: NousResearch_Nous-Capybara-7B-V1.9
parameters:
weight: 1
- model: Intel_neural-chat-7b-v3-3
parameters:
weight: 1
- model: mistralai_Mistral-7B-Instruct-v0.2
parameters:
weight: 1
- model: senseable_WestLake-7B-v2
parameters:
weight: 1
- model: defog_sqlcoder-7b
parameters:
weight: 1
- model: meta-math_MetaMath-Mistral-7B
parameters:
weight: 1
- model: nextai-team_apollo-v1-7b
parameters:
weight: 1
- model: WizardLM_WizardMath-7B-V1.1
parameters:
weight: 1
- model: openchat_openchat-3.5-0106
parameters:
weight: 1
merge_method: task_arithmetic
base_model: mistralai_Mistral-7B-v0.1
parameters:
normalize: true
int8_mask: true
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rombodawg__Everyone-LLM-7b-Base)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.21|
|AI2 Reasoning Challenge (25-Shot)|66.38|
|HellaSwag (10-Shot) |86.02|
|MMLU (5-Shot) |64.94|
|TruthfulQA (0-shot) |57.89|
|Winogrande (5-shot) |80.43|
|GSM8k (5-shot) |65.58|
|
Radu1999/Mistral-Instruct-Ukrainian-slerp | Radu1999 | 2024-02-12T10:05:52Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Radu1999/Mistral-Instruct-Ukrainian-SFT-DPO",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:Radu1999/Mistral-Instruct-Ukrainian-SFT-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T09:46:49Z | ---
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Radu1999/Mistral-Instruct-Ukrainian-SFT-DPO
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- Radu1999/Mistral-Instruct-Ukrainian-SFT-DPO
license: apache-2.0
---
# Mistral-Instruct-Ukrainian-slerp
Mistral-Instruct-Ukrainian-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Radu1999/Mistral-Instruct-Ukrainian-SFT-DPO](https://huggingface.co/Radu1999/Mistral-Instruct-Ukrainian-SFT-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Radu1999/Mistral-Instruct-Ukrainian-SFT-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Radu1999/Mistral-Instruct-Ukrainian-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.bfloat16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
paulml/NMTOB-7B | paulml | 2024-02-12T11:37:58Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuTrixOmniBe-7B-model-remix",
"paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"base_model:Kukedlc/NeuTrixOmniBe-7B-model-remix",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B-v2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T11:34:09Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
base_model:
- Kukedlc/NeuTrixOmniBe-7B-model-remix
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
license: cc-by-nc-4.0
---
# NMTOB-7B
NMTOB-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuTrixOmniBe-7B-model-remix](https://huggingface.co/Kukedlc/NeuTrixOmniBe-7B-model-remix)
* [paulml/OmniBeagleSquaredMBX-v3-7B-v2](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/NeuTrixOmniBe-7B-model-remix
layer_range: [0, 32]
- model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/NeuTrixOmniBe-7B-model-remix
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/NMTOB-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1 | MaziyarPanahi | 2024-03-04T13:35:06Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T18:48:40Z | ---
license: apache-2.0
library_name: transformers
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: TheTop-5x7B-Instruct-P-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 38.57
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 51.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.36
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 50.07
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-P-v0.1
name: Open LLM Leaderboard
---
Merge of top 7B models with PASS method
> mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention.
## Eval
```python
{
"all": {
"acc": 0.6152059168567449,
"acc_stderr": 0.031951119145286845,
"acc_norm": 0.6274010157580394,
"acc_norm_stderr": 0.032831804892806175,
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707694,
"mc2": 0.5006656333594469,
"mc2_stderr": 0.01636490303268174
},
"harness|arc:challenge|25": {
"acc": 0.3447098976109215,
"acc_stderr": 0.013888816286782112,
"acc_norm": 0.3856655290102389,
"acc_norm_stderr": 0.01422425097325717
},
"harness|hellaswag|10": {
"acc": 0.34116709818761204,
"acc_stderr": 0.004731324409133264,
"acc_norm": 0.515435172276439,
"acc_norm_stderr": 0.004987403268345035
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.042763494943765995,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.042763494943765995
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7169811320754716,
"acc_stderr": 0.027724236492700918,
"acc_norm": 0.7169811320754716,
"acc_norm_stderr": 0.027724236492700918
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.03669072477416907,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.03669072477416907
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.048786087144669955,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.048786087144669955
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.03255525359340355,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.03255525359340355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.025487187147859372,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.025487187147859372
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181015,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181015
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7373737373737373,
"acc_stderr": 0.03135305009533085,
"acc_norm": 0.7373737373737373,
"acc_norm_stderr": 0.03135305009533085
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8808290155440415,
"acc_stderr": 0.023381935348121437,
"acc_norm": 0.8808290155440415,
"acc_norm_stderr": 0.023381935348121437
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.617948717948718,
"acc_stderr": 0.024635549163908237,
"acc_norm": 0.617948717948718,
"acc_norm_stderr": 0.024635549163908237
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.027309140588230203,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.027309140588230203
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6512605042016807,
"acc_stderr": 0.030956636328566545,
"acc_norm": 0.6512605042016807,
"acc_norm_stderr": 0.030956636328566545
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590158,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590158
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8431372549019608,
"acc_stderr": 0.025524722324553346,
"acc_norm": 0.8431372549019608,
"acc_norm_stderr": 0.025524722324553346
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233497,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233497
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7174887892376681,
"acc_stderr": 0.03021683101150877,
"acc_norm": 0.7174887892376681,
"acc_norm_stderr": 0.03021683101150877
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.0364129708131373,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.0364129708131373
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228733,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228733
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037181,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7914110429447853,
"acc_stderr": 0.031921934489347235,
"acc_norm": 0.7914110429447853,
"acc_norm_stderr": 0.031921934489347235
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5446428571428571,
"acc_stderr": 0.04726835553719097,
"acc_norm": 0.5446428571428571,
"acc_norm_stderr": 0.04726835553719097
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8290598290598291,
"acc_stderr": 0.024662496845209804,
"acc_norm": 0.8290598290598291,
"acc_norm_stderr": 0.024662496845209804
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903324,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903324
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468348,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468348
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.36312849162011174,
"acc_stderr": 0.016083749986853704,
"acc_norm": 0.36312849162011174,
"acc_norm_stderr": 0.016083749986853704
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.02495418432487991,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.02495418432487991
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042114,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042114
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4367666232073012,
"acc_stderr": 0.01266770191960366,
"acc_norm": 0.4367666232073012,
"acc_norm_stderr": 0.01266770191960366
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.029520095697687765,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.029520095697687765
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6699346405228758,
"acc_stderr": 0.019023726160724553,
"acc_norm": 0.6699346405228758,
"acc_norm_stderr": 0.019023726160724553
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.726530612244898,
"acc_stderr": 0.028535560337128445,
"acc_norm": 0.726530612244898,
"acc_norm_stderr": 0.028535560337128445
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578334,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578334
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.25091799265605874,
"mc1_stderr": 0.015176985027707694,
"mc2": 0.5006656333594469,
"mc2_stderr": 0.01636490303268174
},
"harness|winogrande|5": {
"acc": 0.7261247040252565,
"acc_stderr": 0.012533292732620296
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-P-v0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |46.02|
|AI2 Reasoning Challenge (25-Shot)|38.57|
|HellaSwag (10-Shot) |51.54|
|MMLU (5-Shot) |63.36|
|TruthfulQA (0-shot) |50.07|
|Winogrande (5-shot) |72.61|
|GSM8k (5-shot) | 0.00|
|
indischepartij/MiniCPM-3B-Bacchus | indischepartij | 2024-03-04T12:58:24Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-12T22:53:07Z | ---
license: apache-2.0
library_name: transformers
model-index:
- name: MiniCPM-3B-Bacchus
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 43.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiniCPM-3B-Bacchus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 70.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiniCPM-3B-Bacchus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiniCPM-3B-Bacchus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 43.52
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiniCPM-3B-Bacchus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiniCPM-3B-Bacchus
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/MiniCPM-3B-Bacchus
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_indischepartij__MiniCPM-3B-Bacchus)
| Metric |Value|
|---------------------------------|----:|
|Avg. |52.55|
|AI2 Reasoning Challenge (25-Shot)|43.52|
|HellaSwag (10-Shot) |70.45|
|MMLU (5-Shot) |50.49|
|TruthfulQA (0-shot) |43.52|
|Winogrande (5-shot) |66.85|
|GSM8k (5-shot) |40.49|
|
nlpguy/AlloyIngot | nlpguy | 2024-03-04T13:48:30Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T13:00:43Z | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- eren23/dpo-binarized-NeutrixOmnibe-7B
- Gille/StrangeMerges_21-7B-slerp
model-index:
- name: AlloyIngot
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.98
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.83
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.12
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/AlloyIngot
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: Gille/StrangeMerges_21-7B-slerp
dtype: bfloat16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.3, 0.5, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.7, 0.5, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: eren23/dpo-binarized-NeutrixOmnibe-7B
- layer_range: [0, 32]
model:
model:
path: Gille/StrangeMerges_21-7B-slerp
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__AlloyIngot)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.20|
|AI2 Reasoning Challenge (25-Shot)|73.98|
|HellaSwag (10-Shot) |89.05|
|MMLU (5-Shot) |64.83|
|TruthfulQA (0-shot) |75.12|
|Winogrande (5-shot) |85.08|
|GSM8k (5-shot) |69.14|
|
touqir/Cyrax-7B | touqir | 2024-02-15T03:59:10Z | 731 | 9 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T22:13:02Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Cyrax-7B
## 🏆 Evaluation
### Open LLM Leaderboard
| Model |Average|ARC|HellaSwag|MMLU|TruthfulQA|Winogrande|GSM8K|
|------------------------------------------------------------|------:|------:|---------:|-------:|------:|------:|------:|
|[**Cyrax-7B**](https://huggingface.co/touqir/Cyrax-7B)| **75.98**| **72.95**| 88.19| 64.6| **77.01**| 83.9| **69.22** |
|[Qwen-72B](https://huggingface.co/Qwen/Qwen-72B)| 73.6| 65.19| 85.94| **77.37**| 60.19| 82.48| 70.43|
|[Mixtral-8x7B-Instruct-v0.1-DPO](https://huggingface.co/cloudyu/Mixtral-8x7B-Instruct-v0.1-DPO)| 73.44| 69.8| 87.83| 71.05| 69.18| 81.37| 61.41|
|[Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)| 72.7| 70.14 | 87.55| 71.4| 64.98| 81.06| 61.11 |
|[llama2_70b_mmlu](https://huggingface.co/itsliupeng/llama2_70b_mmlu)| 68.24| 65.61| 87.37| 71.89| 49.15| 82.4| 52.99 |
|[falcon-180B](https://huggingface.co/tiiuae/falcon-180B)| 67.85| 69.45| **88.86**| 70.5| 45.47| **86.9**| 45.94|
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "touqir/Cyrax-7B"
messages = [{"role": "user", "content": "What is Huggingface?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Josephgflowers/tinyllama-730M-test | Josephgflowers | 2024-03-09T13:55:32Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-14T23:49:19Z | ---
license: mit
widget:
- text: '<|system|>
You are a helpful assistant</s>
<|user|>
What is your name? Tell me about yourself.</s>
<|assistant|>'
model-index:
- name: tinyllama-730M-test
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 25.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 33.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.9
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Josephgflowers/tinyllama-730M-test
name: Open LLM Leaderboard
---
I cut my TinyLlama 1.1B Cinder v2 down from 22 layers to 14. At 14 layers there was no coherent text, but there were emerging ideas of a response. I then trained it for 1000 steps on a step-by-step dataset and 6000 steps on Reason-with-cinder. The loss was still over 1 and the learning rate was still over 4, so this model needs significant further training. I am putting it up as a base model that needs work. If you continue training it, please let me know on the TinyLlama Discord; I have some interesting plans for this model.
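For readers who want to reproduce this kind of pruning, the sketch below shows one way to keep only the first 14 decoder layers of a Llama-architecture checkpoint with transformers. It is a minimal illustration, not the exact procedure used here, and the source path is a placeholder.
```python
# Minimal sketch (not the exact procedure used for this model) of cutting a
# 22-layer Llama-architecture checkpoint down to its first 14 decoder layers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

source = "path/to/tinyllama-1.1B-cinder-v2"  # placeholder for the 22-layer source model
keep_layers = 14

model = AutoModelForCausalLM.from_pretrained(source, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(source)

# Keep only the first `keep_layers` decoder blocks and update the config to match.
model.model.layers = torch.nn.ModuleList(model.model.layers[:keep_layers])
model.config.num_hidden_layers = keep_layers

model.save_pretrained("tinyllama-730M-test-base")
tokenizer.save_pretrained("tinyllama-730M-test-base")
```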
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__tinyllama-730M-test)
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.55|
|AI2 Reasoning Challenge (25-Shot)|25.09|
|HellaSwag (10-Shot) |33.82|
|MMLU (5-Shot) |24.43|
|TruthfulQA (0-shot) |42.90|
|Winogrande (5-shot) |51.07|
|GSM8k (5-shot) | 0.00|
|
Yuma42/KangalKhan-Ruby-7B-Fixed | Yuma42 | 2024-05-01T06:50:07Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"argilla/CapybaraHermes-2.5-Mistral-7B",
"argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"conversational",
"en",
"base_model:argilla/CapybaraHermes-2.5-Mistral-7B",
"base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-16T21:36:02Z | ---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
base_model:
- argilla/CapybaraHermes-2.5-Mistral-7B
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
model-index:
- name: KangalKhan-Ruby-7B-Fixed
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.24
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Ruby-7B-Fixed
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Ruby-7B-Fixed
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Ruby-7B-Fixed
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.49
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Ruby-7B-Fixed
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Ruby-7B-Fixed
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Yuma42/KangalKhan-Ruby-7B-Fixed
name: Open LLM Leaderboard
---
# KangalKhan-Ruby-7B
I suggest using ChatML (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are a friendly assistant.<|im_end|>
<|im_start|>user
Hello, what are you?<|im_end|>
<|im_start|>assistant
I am an AI language model designed to assist users with information and answer their questions. How can I help you today?<|im_end|>
```
Q4_K_S GGUF:
https://huggingface.co/Yuma42/KangalKhan-Ruby-7B-Fixed-GGUF
More GGUF variants by [mradermacher](https://huggingface.co/mradermacher):
WARNING: I have observed that these versions output typos in rare cases. If you have the same problem, use my Q4_K_S GGUF above.
https://huggingface.co/mradermacher/KangalKhan-Ruby-7B-Fixed-GGUF
KangalKhan-Ruby-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
* [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: argilla/CapybaraHermes-2.5-Mistral-7B
layer_range: [0, 32]
- model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [1, 0.5, 0.7, 0.3, 0]
- filter: mlp
value: [0, 0.5, 0.3, 0.7, 1]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Yuma42/KangalKhan-Ruby-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Yuma42__KangalKhan-Ruby-7B-Fixed)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.68|
|AI2 Reasoning Challenge (25-Shot)|67.24|
|HellaSwag (10-Shot) |85.22|
|MMLU (5-Shot) |63.21|
|TruthfulQA (0-shot) |56.49|
|Winogrande (5-shot) |77.98|
|GSM8k (5-shot) |61.94|
|
M4-ai/NeuralReyna-Mini-1.8B-v0.2 | M4-ai | 2024-05-12T16:47:07Z | 731 | 12 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:Locutusque/Hercules-v3.0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-17T21:36:53Z | ---
language:
- en
license: apache-2.0
tags:
- conversational
datasets:
- Intel/orca_dpo_pairs
- Locutusque/Hercules-v3.0
inference:
parameters:
do_sample: true
temperature: 0.8
top_p: 0.95
top_k: 40
min_new_tokens: 2
max_new_tokens: 250
repetition_penalty: 1.1
widget:
- text: Hello who are you?
example_title: Identity
- text: What can you do?
example_title: Capabilities
- text: Create a fastapi endpoint to retrieve the weather given a zip code.
example_title: Coding
model-index:
- name: NeuralReyna-Mini-1.8B-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 37.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 60.51
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.75
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 27.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
name: Open LLM Leaderboard
---
# NeuralReyna-Mini-1.8B-v0.2

# Description
This model takes aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tunes it with DPO on the Intel/orca_dpo_pairs dataset.
This model has capabilities in coding, math, science, roleplay, and function calling.
This model was trained on OpenAI's ChatML prompt format.
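A minimal usage sketch, assuming the repository ships a ChatML chat template for the tokenizer; the sampling settings mirror the inference parameters listed above:
```python
# Minimal usage sketch; assumes the tokenizer ships a ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/NeuralReyna-Mini-1.8B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Create a fastapi endpoint to retrieve the weather given a zip code."},
]
# apply_chat_template renders the ChatML turns (<|im_start|> ... <|im_end|>) for us.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=250, do_sample=True, temperature=0.8, top_p=0.95, top_k=40)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```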
# Evaluation
AGIEval:

GPT4ALL:
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_challenge| 1|none | 0|acc |0.3208|± |0.0136|
| | |none | 0|acc_norm|0.3336|± |0.0138|
|arc_easy | 1|none | 0|acc |0.6035|± |0.0100|
| | |none | 0|acc_norm|0.5833|± |0.0101|
|boolq | 2|none | 0|acc |0.6526|± |0.0083|
|hellaswag | 1|none | 0|acc |0.4556|± |0.0050|
| | |none | 0|acc_norm|0.6076|± |0.0049|
|openbookqa | 1|none | 0|acc |0.2600|± |0.0196|
| | |none | 0|acc_norm|0.3460|± |0.0213|
|piqa | 1|none | 0|acc |0.7236|± |0.0104|
| | |none | 0|acc_norm|0.7307|± |0.0104|
|winogrande | 1|none | 0|acc |0.6062|± |0.0137|
# Disclaimer
This model may have overfitted to the DPO training data, and may not perform well.
# Contributions
Thanks to @aloobun and @Locutusque for their contributions to this model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_M4-ai__NeuralReyna-Mini-1.8B-v0.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |44.85|
|AI2 Reasoning Challenge (25-Shot)|37.80|
|HellaSwag (10-Shot) |60.51|
|MMLU (5-Shot) |45.04|
|TruthfulQA (0-shot) |37.75|
|Winogrande (5-shot) |60.93|
|GSM8k (5-shot) |27.07|
|
Kukedlc/NeuralKrishna-7B-slerp | Kukedlc | 2024-03-04T13:07:43Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural4gsm8k",
"Kukedlc/NeuralMaxime-7B-slerp",
"conversational",
"base_model:Kukedlc/Neural4gsm8k",
"base_model:Kukedlc/NeuralMaxime-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-18T18:06:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural4gsm8k
- Kukedlc/NeuralMaxime-7B-slerp
base_model:
- Kukedlc/Neural4gsm8k
- Kukedlc/NeuralMaxime-7B-slerp
model-index:
- name: NeuralKrishna-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.46
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.62
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralKrishna-7B-slerp
name: Open LLM Leaderboard
---
# NeuralKrishna-7B-slerp

NeuralKrishna-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k)
* [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Kukedlc/Neural4gsm8k
layer_range: [0, 32]
- model: Kukedlc/NeuralMaxime-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: Kukedlc/NeuralMaxime-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralKrishna-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralKrishna-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.79|
|AI2 Reasoning Challenge (25-Shot)|73.46|
|HellaSwag (10-Shot) |88.96|
|MMLU (5-Shot) |64.62|
|TruthfulQA (0-shot) |74.29|
|Winogrande (5-shot) |83.27|
|GSM8k (5-shot) |70.13|
|
Gille/StrangeMerges_28-7B-dare_ties | Gille | 2024-04-03T09:13:52Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/ogno-monarch-jaskier-merge-7b-v2",
"Gille/StrangeMerges_25-7B-dare_ties",
"base_model:eren23/ogno-monarch-jaskier-merge-7b-v2",
"base_model:Gille/StrangeMerges_25-7B-dare_ties",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T05:47:51Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/ogno-monarch-jaskier-merge-7b-v2
- Gille/StrangeMerges_25-7B-dare_ties
base_model:
- eren23/ogno-monarch-jaskier-merge-7b-v2
- Gille/StrangeMerges_25-7B-dare_ties
model-index:
- name: StrangeMerges_28-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_28-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_28-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.68
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_28-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.55
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_28-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.5
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_28-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_28-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_28-7B-dare_ties
StrangeMerges_28-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/ogno-monarch-jaskier-merge-7b-v2](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-v2)
* [Gille/StrangeMerges_25-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_25-7B-dare_ties)
## 🧩 Configuration
```yaml
models:
- model: CultriX/MonaTrix-v4
# No parameters necessary for base model
- model: eren23/ogno-monarch-jaskier-merge-7b-v2
parameters:
weight: 0.7
density: 0.6
- model: Gille/StrangeMerges_25-7B-dare_ties
parameters:
weight: 0.3
density: 0.45
merge_method: dare_ties
base_model: CultriX/MonaTrix-v4
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_28-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_28-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.86|
|AI2 Reasoning Challenge (25-Shot)|72.18|
|HellaSwag (10-Shot) |89.08|
|MMLU (5-Shot) |64.68|
|TruthfulQA (0-shot) |77.55|
|Winogrande (5-shot) |83.50|
|GSM8k (5-shot) |68.16|
|
ChaoticNeutrals/Prima-LelantaclesV5-7b | ChaoticNeutrals | 2024-03-04T20:34:27Z | 731 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Test157t/Pasta-Lake-7b",
"base_model:Test157t/Prima-LelantaclesV4-7b-16k",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-21T10:47:35Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Test157t/Pasta-Lake-7b
- Test157t/Prima-LelantaclesV4-7b-16k
model-index:
- name: Prima-LelantaclesV5-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.65
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.26
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
name: Open LLM Leaderboard
---
Update: Getting surprisingly good results at 16384 context, which is unexpected, given that this part of the context window usually goes untouched by other Mistral models, which work at around 8192.


Thanks to @Lewdiculus for the Quants: https://huggingface.co/Lewdiculous/Prima-LelantaclesV5-7b-GGUF
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method.
The following models were included in the merge:
* [Test157t/Pasta-Lake-7b](https://huggingface.co/Test157t/Pasta-Lake-7b) + [Test157t/Prima-LelantaclesV4-7b-16k](https://huggingface.co/Test157t/Prima-LelantaclesV4-7b-16k)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
normalize: true
models:
- model: Test157t/Pasta-Lake-7b
parameters:
weight: 1
- model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
weight: 1
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChaoticNeutrals__Prima-LelantaclesV5-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.09|
|AI2 Reasoning Challenge (25-Shot)|70.65|
|HellaSwag (10-Shot) |87.87|
|MMLU (5-Shot) |64.52|
|TruthfulQA (0-shot) |68.26|
|Winogrande (5-shot) |82.40|
|GSM8k (5-shot) |64.82|
|
glenn2/gemma-2b-lora3 | glenn2 | 2024-02-25T01:42:24Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T01:18:50Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
glenn2/gemma-2b-lora16b2 | glenn2 | 2024-02-25T02:20:04Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T02:11:57Z | ---
library_name: transformers
license: mit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
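In the absence of an official snippet, a minimal loading sketch, assuming standard `transformers` causal-LM usage for this checkpoint (the prompt text and generation settings below are illustrative only):

```python
# Minimal sketch, assuming standard transformers AutoModelForCausalLM usage for this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "glenn2/gemma-2b-lora16b2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Illustrative prompt; replace with your own input.
inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```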
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OpenBuddy/openbuddy-gemma-7b-v18.1-4k | OpenBuddy | 2024-02-28T12:35:31Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T01:51:49Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: gemma
license_link: https://ai.google.dev/gemma/terms
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
BaseModel: Gemma-7b
Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer (Chinese version)
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution in critical or high-risk scenarios and should not use these models in ways that could lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, the control of software and hardware systems that may cause harm, and important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
Ba2han/Cucumber-7b-10k | Ba2han | 2024-02-26T17:02:21Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:HuggingFaceH4/no_robots",
"dataset:Ba2han/Reddit-instruct-curated_rated-1.2k",
"dataset:Ba2han/databricks-dolly_rated",
"dataset:Open-Orca/OpenOrca",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T07:09:02Z | ---
license: cc-by-4.0
datasets:
- HuggingFaceH4/no_robots
- Ba2han/Reddit-instruct-curated_rated-1.2k
- Ba2han/databricks-dolly_rated
- Open-Orca/OpenOrca
---

The training dataset consists of 2k (longest) examples from no_robots, reddit_instruct, dolly, OpenOrca, plus two other personal datasets.
Please use ChatML with the default system message, or enter your own. The model was trained with various system messages; the one in the config is the default.
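For reference, a small sketch of building a ChatML-formatted prompt (the system message below is a stand-in, not the exact default shipped in this model's config):

```python
# Sketch of a ChatML-formatted prompt; the system message is illustrative,
# not the exact default from this model's config.
system = "You are a helpful assistant."
user = "Explain why the sky is blue, like I'm five."

prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
# Pass `prompt` to the tokenizer / generation pipeline as usual.
```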
The model is:
- Very good at generating long and coherent text.
- Creative due to data from Reddit ELI5 and a few other sources.
- Better at handling longer input.
- Not great with short text, both as input and in generation.
The aim is to see how well the **"Long is More for Alignment"** paper holds up. This is basically a combination of LIMA + LMA. There should be no benchmark contamination as far as I am aware. Around 70% of the data is from the mentioned datasets. I am happy with how it turned out.

|
mayacinka/chatty-djinn-14B | mayacinka | 2024-03-04T14:07:57Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-0106",
"teknium/OpenHermes-2.5-Mistral-7B",
"conversational",
"base_model:openchat/openchat-3.5-0106",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T14:59:02Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-0106
- teknium/OpenHermes-2.5-Mistral-7B
base_model:
- openchat/openchat-3.5-0106
- teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: chatty-djinn-14B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/chatty-djinn-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.45
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/chatty-djinn-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/chatty-djinn-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/chatty-djinn-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/chatty-djinn-14B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/chatty-djinn-14B
name: Open LLM Leaderboard
---

# djinn
djinn is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* bardsai/jaskier-7b-dpo-v6.1
* senseable/WestLake-7B-v2
* NousResearch/Nous-Hermes-2-Mistral-7B-DPO
* paulml/OGNO-7B
* paulml/DPOB-INMTOB-7B
* mlabonne/AlphaMonarch-7B
# 🏆 Benchmarks
Nous benchmarks, find more [details here](https://gist.github.com/majacinka/3f2a797c8872ca9bfdaa2bbf3369edb5)
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[chatty-djinn-14B](https://huggingface.co/mayacinka/chatty-djinn-14B)| 38.43| 76.29| 68.02| 47.6| 57.59|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |23.62|± | 2.67|
| | |acc_norm|21.65|± | 2.59|
|agieval_logiqa_en | 0|acc |32.26|± | 1.83|
| | |acc_norm|33.79|± | 1.86|
|agieval_lsat_ar | 0|acc |23.04|± | 2.78|
| | |acc_norm|23.04|± | 2.78|
|agieval_lsat_lr | 0|acc |38.82|± | 2.16|
| | |acc_norm|39.22|± | 2.16|
|agieval_lsat_rc | 0|acc |59.48|± | 3.00|
| | |acc_norm|54.65|± | 3.04|
|agieval_sat_en | 0|acc |75.73|± | 2.99|
| | |acc_norm|74.27|± | 3.05|
|agieval_sat_en_without_passage| 0|acc |35.92|± | 3.35|
| | |acc_norm|34.47|± | 3.32|
|agieval_sat_math | 0|acc |31.36|± | 3.14|
| | |acc_norm|26.36|± | 2.98|
Average: 38.43%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |62.12|± | 1.42|
| | |acc_norm|65.44|± | 1.39|
|arc_easy | 0|acc |83.88|± | 0.75|
| | |acc_norm|78.58|± | 0.84|
|boolq | 1|acc |88.07|± | 0.57|
|hellaswag | 0|acc |65.18|± | 0.48|
| | |acc_norm|86.45|± | 0.34|
|openbookqa | 0|acc |39.60|± | 2.19|
| | |acc_norm|48.60|± | 2.24|
|piqa | 0|acc |82.26|± | 0.89|
| | |acc_norm|83.62|± | 0.86|
|winogrande | 0|acc |83.27|± | 1.05|
Average: 76.29%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |50.55|± | 1.75|
| | |mc2 |68.02|± | 1.52|
Average: 68.02%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|57.89|± | 3.59|
|bigbench_date_understanding | 0|multiple_choice_grade|64.50|± | 2.49|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|32.56|± | 2.92|
|bigbench_geometric_shapes | 0|multiple_choice_grade|26.18|± | 2.32|
| | |exact_str_match | 1.11|± | 0.55|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|30.80|± | 2.07|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|22.86|± | 1.59|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|57.67|± | 2.86|
|bigbench_movie_recommendation | 0|multiple_choice_grade|62.00|± | 2.17|
|bigbench_navigate | 0|multiple_choice_grade|56.20|± | 1.57|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|65.65|± | 1.06|
|bigbench_ruin_names | 0|multiple_choice_grade|64.73|± | 2.26|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|17.33|± | 1.20|
|bigbench_snarks | 0|multiple_choice_grade|76.24|± | 3.17|
|bigbench_sports_understanding | 0|multiple_choice_grade|75.15|± | 1.38|
|bigbench_temporal_sequences | 0|multiple_choice_grade|48.90|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.32|± | 1.18|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.17|± | 0.92|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|57.67|± | 2.86|
Average: 47.6%
Average score: 57.59%
## 🧩 Configuration
Inspired by [theprofessor's config](https://huggingface.co/abacusai/TheProfessor-155b)
```yaml
merge_method: linear # use linear so we can include multiple models, albeit at a zero weight
parameters:
weight: 1.0 # weight everything as 1 unless specified otherwise - linear with one model weighted at 1 is a no-op like passthrough
slices:
- sources:
- model: openchat/openchat-3.5-0106
layer_range: [0, 1]
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [0, 1]
parameters:
weight: 0
- sources:
- model: bardsai/jaskier-7b-dpo-v6.1
layer_range: [1, 10]
- sources:
- model: senseable/WestLake-7B-v2
layer_range: [10, 20]
- sources:
- model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
layer_range: [20, 30]
- sources:
- model: paulml/OGNO-7B
layer_range: [15, 25]
- sources:
- model: paulml/DPOB-INMTOB-7B
layer_range: [22, 32]
- sources:
- model: mlabonne/AlphaMonarch-7B
layer_range: [5, 15]
- sources:
- model: openchat/openchat-3.5-0106
layer_range: [31, 32]
- model: teknium/OpenHermes-2.5-Mistral-7B
layer_range: [31, 32]
parameters:
weight: 0
dtype: float16
tokenizer_source: model:openchat/openchat-3.5-0106
```
## An example
*User:* Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
Think step by step and the result MUST BE A NUMBER
*djinn:*
In this family structure, we know there are 4 aka '1 + 3-brothers' (1 being initially for Sally) existing individuals who are
also the siblings of the 3-brother-group. As each of the 3-brothers has 2-sisters each, that sums up to 6 total sister- roles. Since
we already accounted for one of the sisters (assuming to be the oldest or in alphabetical order, as names were not given) to be part
of the 3-brother group, we subtract one from the 6-sisters total. So, considering the provided information, Sally would have 6 - 1 = 5
remaining younger or alphabetically-wise- placed, biological-sisters. However, note that this problem statement might not perfectly align
with real-life complexities, such as same-sex twins, adoption, etc., but within the given context, the solution is 5.
Remember, in more intricate or unclear family tree scenarios, it'd be best to directly ask for clarifications or consider additional data
points. In short, with the given constraints, Sally has 5 known, direct, biological, and likely-in-age-order, younger/alphabetically-placed,
sister-siblings.(To reiterate the core result in a concise manner: 5).
(Please excuse the verbosity, aiming for thoroughness and clarity.)
This response maintains the initial request's numerical focus while providing a detailed explanation to ensure understanding.
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/djinn"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__chatty-djinn-14B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |72.08|
|AI2 Reasoning Challenge (25-Shot)|70.39|
|HellaSwag (10-Shot) |86.45|
|MMLU (5-Shot) |64.40|
|TruthfulQA (0-shot) |67.57|
|Winogrande (5-shot) |83.11|
|GSM8k (5-shot) |60.58|
|
giraffe176/Open_Neural_Monarch_Maidv0.1 | giraffe176 | 2024-03-04T16:22:08Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Intel/neural-chat-7b-v3-1",
"base_model:NeverSleep/Noromaid-7B-0.4-DPO",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:mlabonne/Monarch-7B",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-29T04:21:06Z | ---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Intel/neural-chat-7b-v3-1
- NeverSleep/Noromaid-7B-0.4-DPO
- teknium/OpenHermes-2.5-Mistral-7B
- mistralai/Mistral-7B-v0.1
- mlabonne/Monarch-7B
model-index:
- name: Open_Neural_Monarch_Maidv0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.94
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.33
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Open_Neural_Monarch_Maidv0.1
name: Open LLM Leaderboard
---
# Open_Neural_Monarch_Maidv0.1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: mlabonne/Monarch-7B
parameters:
density: 0.53
weight: 0.15
- model: NeverSleep/Noromaid-7B-0.4-DPO
parameters:
density: 0.53
weight: 0.3
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.53
weight: 0.3
- model: Intel/neural-chat-7b-v3-1
parameters:
density: 0.53
weight: 0.25
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
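The card does not include a usage snippet; a minimal inference sketch, assuming standard `transformers` pipeline usage for this merged checkpoint (prompt and sampling settings are illustrative):

```python
# Minimal inference sketch, assuming standard transformers pipeline usage for this merge.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="giraffe176/Open_Neural_Monarch_Maidv0.1",
    torch_dtype="auto",
)
out = pipe("What are merged language models good for?", max_new_tokens=200, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```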
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__Open_Neural_Monarch_Maidv0.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.28|
|AI2 Reasoning Challenge (25-Shot)|67.66|
|HellaSwag (10-Shot) |85.94|
|MMLU (5-Shot) |65.02|
|TruthfulQA (0-shot) |56.39|
|Winogrande (5-shot) |79.32|
|GSM8k (5-shot) |61.33|
|
ajibawa-2023/OpenHermes-2.5-Code-290k-13B | ajibawa-2023 | 2024-03-17T17:32:41Z | 731 | 10 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"code",
"finetune",
"synthetic data",
"text-generation-inference",
"conversational",
"en",
"dataset:ajibawa-2023/OpenHermes-2.5-Code-290k",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-01T09:15:13Z | ---
language:
- en
license: apache-2.0
tags:
- code
- finetune
- synthetic data
- text-generation-inference
- conversational
datasets:
- ajibawa-2023/OpenHermes-2.5-Code-290k
- teknium/OpenHermes-2.5
model-index:
- name: OpenHermes-2.5-Code-290k-13B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 57.34
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.48
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.53
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.5
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.82
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.3
name: accuracy
source:
url: >-
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/OpenHermes-2.5-Code-290k-13B
name: Open LLM Leaderboard
---
**OpenHermes-2.5-Code-290k-13B**
OpenHermes-2.5-Code-290k-13B is a state-of-the-art Llama-2 fine-tune trained on an additional code dataset.
This model performs much better than teknium's [model](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B). You can check the **Eval results** below.
This model is trained on my existing dataset [OpenHermes-2.5-Code-290k](https://huggingface.co/datasets/ajibawa-2023/OpenHermes-2.5-Code-290k).
This dataset is an amalgamation of two datasets. I have used [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5), a super-quality dataset made available by teknium. The other dataset is my own [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT).
The dataset is in Vicuna/ShareGPT format. There are around **1.29 million** sets of conversations. I have cleaned the dataset provided by Teknium and removed metadata such as "source" & "category". This dataset consists primarily of synthetically generated instruction and chat samples.
This model has enhanced coding capabilities in addition to other capabilities such as **blogging, story generation, Q&A and many more**.
**Training:**
The entire model was trained on 4 x A100 80GB GPUs. For 2 epochs, training took **21 days**. The Fschat & DeepSpeed codebases were used for training. This model was trained on top of Llama-2 by Meta.
This is a fully fine-tuned model. Links to quantized models will be updated soon.
**GPTQ, GGUF, AWQ & Exllama**
GPTQ: TBA
GGUF: [Link](https://huggingface.co/LoneStriker/OpenHermes-2.5-Code-290k-13B-GGUF)
AWQ: TBA
Exllama v2: [Link](https://huggingface.co/bartowski/OpenHermes-2.5-Code-290k-13B-exl2)
Special Thanks to [LoneStriker](https://huggingface.co/LoneStriker) and [bartowski](https://huggingface.co/bartowski/) for quantising.
**Example Prompt:**
```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. It can generate Story, Blogs .....
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
You can modify the above prompt as per your requirements. I have used ShareGPT/Vicuna format v1.1.
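For example, a small sketch of assembling this prompt format and generating with `transformers` (the system text is taken from the template above; the user request and generation settings are illustrative):

```python
# Sketch of using the ShareGPT/Vicuna-style prompt shown above with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/OpenHermes-2.5-Code-290k-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

system = ("This is a conversation with your helpful AI assistant. "
          "AI assistant can generate Code in various Programming Languages along with necessary explanation.")
prompt = (f"{system}\n\nContext\nYou are a helpful AI assistant.\n\n"
          "USER: Write a Python function that reverses a string.\nASSISTANT:")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```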
I want to say special thanks to the open-source community for helping & guiding me to better understand AI/model development.
Thank you for your love & support.
**Example Output**
I will update soon.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__OpenHermes-2.5-Code-290k-13B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.33|
|AI2 Reasoning Challenge (25-Shot)|57.34|
|HellaSwag (10-Shot) |80.48|
|MMLU (5-Shot) |56.53|
|TruthfulQA (0-shot) |52.50|
|Winogrande (5-shot) |74.82|
|GSM8k (5-shot) |58.30| |
eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3 | eren23 | 2024-03-03T08:15:48Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"en",
"dataset:argilla/OpenHermesPreferences",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-02T15:34:32Z | ---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- text-generation-inference
datasets:
- argilla/OpenHermesPreferences
pipeline_tag: text-generation
model-index:
- name: ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.48
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3
name: Open LLM Leaderboard
---
This is https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO, DPO-finetuned a bit further on the https://huggingface.co/datasets/argilla/dpo-mix-7k dataset.
As described in the original model repo, it has not yet been fully tested and is therefore potentially a poor fit for out-of-the-box use; use with caution.
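For illustration only, a rough sketch of this kind of extra DPO pass with TRL's `DPOTrainer`. The hyperparameters, the column handling for `argilla/dpo-mix-7k` (assumed here to store chat-message lists whose last turn is the response), and the TRL 0.7-style API are assumptions, not the author's actual training script:

```python
# Rough DPO illustration; hyperparameters and dataset preprocessing are assumptions,
# not the author's actual settings.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

raw = load_dataset("argilla/dpo-mix-7k", split="train")

def to_pairs(row):
    # Flatten chat-format pairs into the prompt/chosen/rejected strings DPOTrainer expects
    # (assumes "chosen"/"rejected" are message lists ending with the model response).
    return {
        "prompt": tokenizer.apply_chat_template(row["chosen"][:-1], tokenize=False, add_generation_prompt=True),
        "chosen": row["chosen"][-1]["content"],
        "rejected": row["rejected"][-1]["content"],
    }

train_dataset = raw.map(to_pairs, remove_columns=raw.column_names)

args = TrainingArguments(output_dir="ogno-monarch-jaskier-dpo-v3", per_device_train_batch_size=1,
                         gradient_accumulation_steps=8, learning_rate=5e-6, num_train_epochs=1)

# With ref_model=None, TRL builds a frozen reference copy of the model internally.
trainer = DPOTrainer(model, ref_model=None, args=args, beta=0.1,
                     train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```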
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.40|
|AI2 Reasoning Challenge (25-Shot)|73.04|
|HellaSwag (10-Shot) |89.11|
|MMLU (5-Shot) |64.79|
|TruthfulQA (0-shot) |77.48|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |69.22|
|
core-3/kuno-dogwalker-7b | core-3 | 2024-03-04T15:19:50Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"mlabonne/AlphaMonarch-7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mlabonne/AlphaMonarch-7B",
"license:cc-by-nc-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-02T23:27:19Z | ---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/AlphaMonarch-7B
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/AlphaMonarch-7B
model-index:
- name: kuno-dogwalker-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogwalker-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogwalker-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.96
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogwalker-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogwalker-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogwalker-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogwalker-7b
name: Open LLM Leaderboard
---
# kuno-dogwalker-7b
# 🦮🦮🦮🥷
Decent metrics, but writing feels off compared to [kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b).
|Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-------------------|---------|-----|-----------|------|------------|------------|-------|
| eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO | 76.45 | 73.12 | 89.09 | 64.80 | 77.45 | 84.77 | 69.45 |
| mlabonne/AlphaMonarch-7B | 75.99 | 73.04 | 89.18 | 64.40 | 77.91 | 84.69 | 66.72 |
| **core-3/kuno-dogwalker-7b** | **74.94** | **72.01** | **88.17** | **64.96** | **71.39** | **82.00** | **71.11** |
| core-3/kuno-royale-v2-7b | 74.80 | 72.01 | 88.15 | 65.07 | 71.10 | 82.24 | 70.20 |
| core-3/kuno-royale-7B | 74.74 | 71.76 | 88.20 | 65.13 | 71.12 | 82.32 | 69.90 |
| SanjiWatsuki/Kunoichi-DPO-v2-7B | 72.46 | 69.62 | 87.44 | 64.94 | 66.06 | 80.82 | 65.88 |
| SanjiWatsuki/Kunoichi-7B | 72.13 | 68.69 | 87.10 | 64.90 | 64.04 | 81.06 | 67.02 |
kuno-dogwalker-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: mlabonne/AlphaMonarch-7B
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "core-3/kuno-dogwalker-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
cloudyu/mistral_18B_instruct_v0.1 | cloudyu | 2024-03-05T00:28:25Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T00:15:16Z | ---
license: cc-by-4.0
---
```
this is an 80-layer model based on the Mistral architecture,
SFT-tuned on vicgalle/alpaca-gpt4.
the template is "{instruction} {inputs} \n {output}"
```
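A small sketch of applying that template at inference time (model loading and generation settings are illustrative, assuming standard `transformers` usage):

```python
# Sketch of applying the "{instruction} {inputs} \n {output}" template at inference time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/mistral_18B_instruct_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

instruction = "write a story about yosemite."
extra_inputs = ""  # optional extra context for the instruction
prompt = f"{instruction} {extra_inputs} \n "  # the model continues with the {output} part

encoded = tokenizer(prompt, return_tensors="pt").to(model.device)
generated = model.generate(**encoded, max_new_tokens=512)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```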
Example output:
```
<s> write a story about yosemite.
Once upon a time, in the heart of California's Sierra Nevada Mountains, there was a place known as Yosemite National Park. This breathtaking landscape was home to some of nature's most stunning creations - towering redwood trees, crystal-clear streams, and cascading waterfalls that tumbled down from the mountainside.
In this magical land lived a young girl named Emily. She had grown up in the small village at the base of the mountain, surrounded by the beauty of nature. Every day, she would wander through the forests, exploring every nook and cranny of her beloved home.
One day, while wandering along the banks of the Merced River, Emily stumbled across something unusual. A small, shimmering object lay among the rocks. It was a piece of amber, with a trapped insect inside. The insect was an ancient one, unlike any she had ever seen before. It was a prehistoric butterfly, its wings spanned over 6 inches, and it was covered in intricate patterns.
Emily was fascinated by the creature and took it back to her village. She showed it to the elder of the village, who told her that the butterfly was a sign of good luck. He also shared with her the legend of the park, which spoke of a great spirit living within the mountains, protecting all those who lived within its borders.
From that day on, Emily dedicated herself to preserving the natural beauty of Yosemite. She spent her days helping to clean up trash, planting new trees, and teaching others about the importance of taking care of Mother Nature. And as she did so, she felt a sense of peace and happiness like never before.
Years passed, and Emily grew old. But even in her twilight years, she continued to wander the forests of Yosemite, sharing her love for nature with everyone she met. And as she looked out at the beautiful landscapes around her, she knew that she had done her part in keeping the spirit of Yosemite alive.
The end.</s>
``` |
cloudyu/mistral_18B_v0.1 | cloudyu | 2024-03-05T12:40:30Z | 731 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-05T12:29:42Z | ---
license: cc
---
Pretrained Mistral 18B model.
Example output:
```
<s> what is biggest animal in earth?
The blue whale, which can grow to a length of 100 feet and weigh up to 200 tons, is often considered the largest living animal on Earth. However, it's important to note that there are several species of animals with different measurements for their size, weight, or overall mass, so the question of "what is the biggest animal" is subjective. For example, the African elephant, the giraffe, the hippopotamus, and the rhinoceros all hold claims to being the largest land mammal. In terms of fish, the bluewhale is also surpassed by the bignnhead mintay, a marine creature that grows to an average length of 49 ft (15 m) and a maximum recorded length of 63 ft (19.2 m). So, while the bluewhale may be one of the most well-known large creatures, it is not necessarily the undisputed king of the animal kingdom.</s>
``` |
Corianas/DPO-miniguanaco-1.5T | Corianas | 2024-03-09T02:46:10Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:Intel/orca_dpo_pairs",
"arxiv:1910.09700",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-06T15:32:09Z | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- Intel/orca_dpo_pairs
model-index:
- name: DPO-miniguanaco-1.5T
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 30.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Corianas/DPO-miniguanaco-1.5T
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 54.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Corianas/DPO-miniguanaco-1.5T
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 24.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Corianas/DPO-miniguanaco-1.5T
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.69
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Corianas/DPO-miniguanaco-1.5T
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Corianas/DPO-miniguanaco-1.5T
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Corianas/DPO-miniguanaco-1.5T
name: Open LLM Leaderboard
---
# Model Card for Model ID
This is a DPO version of Corianas/tiny-llama-miniguanaco-1.5T, which was a finetuned version of TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__DPO-miniguanaco-1.5T)
| Metric |Value|
|---------------------------------|----:|
|Avg. |35.13|
|AI2 Reasoning Challenge (25-Shot)|30.63|
|HellaSwag (10-Shot) |54.05|
|MMLU (5-Shot) |24.79|
|TruthfulQA (0-shot) |42.69|
|Winogrande (5-shot) |58.64|
|GSM8k (5-shot) | 0.00|
|
ResplendentAI/Flora_DPO_7B | ResplendentAI | 2024-03-10T05:01:15Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:mlabonne/chatml_dpo_pairs",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T07:26:55Z | ---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
datasets:
- mlabonne/chatml_dpo_pairs
- ResplendentAI/Synthetic_Soul_1k
model-index:
- name: Flora_DPO_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.28
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_DPO_7B
name: Open LLM Leaderboard
---
# Flora DPO

Finetuned with this DPO dataset: https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs
Quants available here:
https://huggingface.co/solidrust/Flora-7B-DPO-AWQ
https://huggingface.co/Test157t/ResplendentAI-Flora_DPO_7B-5bpw-exl2
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ResplendentAI__Flora_DPO_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.26|
|AI2 Reasoning Challenge (25-Shot)|71.76|
|HellaSwag (10-Shot) |88.28|
|MMLU (5-Shot) |64.13|
|TruthfulQA (0-shot) |71.08|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |65.81|
|
Gille/StrangeMerges_34-7B-slerp | Gille | 2024-03-09T02:43:04Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"ContextualAI/Contextual_KTO_Mistral_PairRM",
"Gille/StrangeMerges_30-7B-slerp",
"conversational",
"base_model:ContextualAI/Contextual_KTO_Mistral_PairRM",
"base_model:Gille/StrangeMerges_30-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-07T13:06:35Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- ContextualAI/Contextual_KTO_Mistral_PairRM
- Gille/StrangeMerges_30-7B-slerp
base_model:
- ContextualAI/Contextual_KTO_Mistral_PairRM
- Gille/StrangeMerges_30-7B-slerp
---
# StrangeMerges_34-7B-slerp
StrangeMerges_34-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ContextualAI/Contextual_KTO_Mistral_PairRM](https://huggingface.co/ContextualAI/Contextual_KTO_Mistral_PairRM)
* [Gille/StrangeMerges_30-7B-slerp](https://huggingface.co/Gille/StrangeMerges_30-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: ContextualAI/Contextual_KTO_Mistral_PairRM
layer_range: [0, 32]
- model: Gille/StrangeMerges_30-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: ContextualAI/Contextual_KTO_Mistral_PairRM
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 0.3, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0.7, 0.5, 0]
- value: 0.5
dtype: bfloat16
```
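For reference, the `t` values above schedule a spherical linear interpolation between the two checkpoints across layer groups. Treating corresponding weight tensors as flat vectors $p_0$ and $p_1$ separated by angle $\Omega$, a standard slerp (a sketch of the general formula, not mergekit's exact implementation) computes:

$$
\mathrm{slerp}(p_0, p_1; t) = \frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\, p_0 + \frac{\sin(t\,\Omega)}{\sin\Omega}\, p_1,
\qquad
\Omega = \arccos\!\left(\frac{p_0 \cdot p_1}{\lVert p_0 \rVert\, \lVert p_1 \rVert}\right)
$$

Lower `t` keeps the merged tensor closer to one endpoint and higher `t` moves it toward the other.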
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_34-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ValiantLabs/Esper-70b | ValiantLabs | 2024-03-14T17:45:41Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"esper",
"dev-ops",
"developer",
"code",
"code-instruct",
"valiant",
"valiant-labs",
"code-llama",
"llama-2",
"llama-2-chat",
"70b",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-12T13:39:37Z | ---
language:
- en
pipeline_tag: text-generation
tags:
- esper
- dev-ops
- developer
- code
- code-instruct
- valiant
- valiant-labs
- code-llama
- llama
- llama-2
- llama-2-chat
- 70b
model_type: llama
license: apache-2.0
---

Esper-70b is the DevOps code specialist!
- Overall code capabilities with a DevOps focus: specialized in scripting language code, Terraform files, Dockerfiles, YAML, and more!
- Also trained on further code-instruct and chat-instruct data for generally improved chat quality.
- Built on llama-2-70b architecture, using [CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) as the base model.
(If you're looking for a friendly general-purpose chat model, try ours: [llama-13b](https://huggingface.co/ValiantLabs/ShiningValiantXS) and [70b](https://huggingface.co/ValiantLabs/ShiningValiant))
## Version
This is Version **1.0** of Esper-70b.
The current version of Esper-70b uses [CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) trained on two sets of data:
- code from [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup), with our sub-selection focused on scripting languages, Terraform/build scripts, and YAML files.
- our private data for general code-instruct performance, chat-quality response, and user satisfaction. (A portion of this data was also used in [Shining Valiant 1.4](https://huggingface.co/ValiantLabs/ShiningValiant), our previous general-purpose Llama 70b finetune.)
Esper-70b is the newest release in our Build Tools campaign, to deliver helpful open source capabilities for users and creators. We're working on more tools to come! For everyone to use :)
We're planning on continually upgrading this model with more data, to improve existing capabilities and add new ones relevant to a DevOps user base.
## Prompting Guide
Esper-70b uses the following recommended chat format, based on CodeLlama-70b chat format:
Source: system\n\n You are Esper, an expert technical assistant AI. Provide high quality code to the user. <step> Source: user\n\n Hi! Can you explain this Terraform code, thank you:
(Generally, anything that works with [CodeLlama-70b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf) will work with Esper-70b.)
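A minimal generation sketch is below. It assumes the Esper-70b tokenizer ships the same chat template as CodeLlama-70b-Instruct-hf, so that `apply_chat_template` reproduces the `Source:` / `<step>` format shown above; if it does not, build the prompt string manually in that format.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ValiantLabs/Esper-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are Esper, an expert technical assistant AI. Provide high quality code to the user."},
    {"role": "user", "content": "Hi! Can you explain this Terraform code, thank you: ..."},
]
# Assumes the tokenizer carries a chat template matching the Source:/<step> format above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```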

Esper-70b is created by [Valiant Labs.](http://valiantlabs.ca/)
Try our flagship chat model, [Shining Valiant!](https://huggingface.co/ValiantLabs/ShiningValiant)
Check out our function-calling model [Fireplace](https://huggingface.co/ValiantLabs/Fireplace-13b) for Llama-13b!
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models. |
meta-llama/CodeLlama-13b-hf | meta-llama | 2024-03-14T18:41:01Z | 731 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"code",
"arxiv:2308.12950",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T19:51:25Z | ---
extra_gated_heading: You need to share contact information with Meta to access this model
extra_gated_prompt: >-
### LLAMA 2 COMMUNITY LICENSE AGREEMENT
"Agreement" means the terms and conditions for use, reproduction, distribution
and modification of the Llama Materials set forth herein.
"Documentation" means the specifications, manuals and documentation
accompanying Llama 2 distributed by Meta at
https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Licensee" or "you" means you, or your employer or any other person or entity
(if you are entering into this Agreement on such person or entity's behalf),
of the age required under applicable laws, rules or regulations to provide
legal consent and that has legal authority to bind your employer or such other
person or entity if you are entering in this Agreement on their behalf.
"Llama 2" means the foundational large language models and software and
algorithms, including machine-learning model code, trained model weights,
inference-enabling code, training-enabling code, fine-tuning enabling code and
other elements of the foregoing distributed by Meta at
ai.meta.com/resources/models-and-libraries/llama-downloads/.
"Llama Materials" means, collectively, Meta's proprietary Llama 2 and
documentation (and any portion thereof) made available under this Agreement.
"Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
if you are an entity, your principal place of business is in the EEA or
Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or
element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
transferable and royalty-free limited license under Meta's intellectual
property or other rights owned by Meta embodied in the Llama Materials to
use, reproduce, distribute, copy, create derivative works of, and make
modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works
thereof, available to a third party, you shall provide a copy of this
Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a
Licensee as part of an integrated end user product, then Section 2 of this
Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute
the following attribution notice within a "Notice" text file distributed as a
part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and
regulations (including trade compliance laws and regulations) and adhere to
the Acceptable Use Policy for the Llama Materials (available at
https://ai.meta.com/llama/use-policy), which is hereby incorporated by
reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama
Materials to improve any other large language model (excluding Llama 2 or
derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the
monthly active users of the products or services made available by or for
Licensee, or Licensee's affiliates, is greater than 700 million monthly
active users in the preceding calendar month, you must request a license from
Meta, which Meta may grant to you in its sole discretion, and you are not
authorized to exercise any of the rights under this Agreement unless or until
Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection
with the Llama Materials, neither Meta nor Licensee may use any name or mark
owned by or associated with the other or any of its affiliates, except as
required for reasonable and customary use in describing and redistributing
the Llama Materials.
b. Subject to Meta's ownership of Llama Materials and derivatives made by or
for Meta, with respect to any derivative works and modifications of the Llama
Materials that are made by you, as between you and Meta, you are and will be
the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any
entity (including a cross-claim or counterclaim in a lawsuit) alleging that
the Llama Materials or Llama 2 outputs or results, or any portion of any of
the foregoing, constitutes infringement of intellectual property or other
rights owned or licensable by you, then any licenses granted to you under
this Agreement shall terminate as of the date such litigation or claim is
filed or instituted. You will indemnify and hold harmless Meta from and
against any claim by any third party arising out of or related to your use or
distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your
acceptance of this Agreement or access to the Llama Materials and will
continue in full force and effect until terminated in accordance with the
terms and conditions herein. Meta may terminate this Agreement if you are in
breach of any term or condition of this Agreement. Upon termination of this
Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and
construed under the laws of the State of California without regard to choice
of law principles, and the UN Convention on Contracts for the International
Sale of Goods does not apply to this Agreement. The courts of California
shall have exclusive jurisdiction of any dispute arising out of this
Agreement.
USE POLICY
### Llama 2 Acceptable Use Policy
Meta is committed to promoting safe and fair use of its tools and features,
including Llama 2. If you access or use Llama 2, you agree to this Acceptable
Use Policy (“Policy”). The most recent copy of this policy can be found at
[ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
#### Prohibited Uses
We want everyone to use Llama 2 safely and responsibly. You agree you will not
use, or allow others to use, Llama 2 to:
1. Violate the law or others’ rights, including to:
1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
1. Violence or terrorism
2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
3. Human trafficking, exploitation, and sexual violence
4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
5. Sexual solicitation
6. Any other criminal activity
2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
2. Engage in, promote, incite, facilitate, or assist in the planning or
development of activities that present a risk of death or bodily harm to
individuals, including use of Llama 2 related to the following:
1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
2. Guns and illegal weapons (including weapon development)
3. Illegal drugs and regulated/controlled substances
4. Operation of critical infrastructure, transportation technologies, or heavy machinery
5. Self-harm or harm to others, including suicide, cutting, and eating disorders
6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
3. Intentionally deceive or mislead others, including use of Llama 2 related
to the following:
1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
3. Generating, promoting, or further distributing spam
4. Impersonating another individual without consent, authorization, or legal right
5. Representing that the use of Llama 2 or outputs are human-generated
6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems
that could lead to a violation of this Policy through one of the following
means:
* Reporting issues with the model:
[github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
* Reporting risky content generated by the model:
[developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
* Reporting bugs and security concerns:
[facebook.com/whitehat/info](http://facebook.com/whitehat/info)
* Reporting violations of the Acceptable Use Policy or unlicensed uses of
Llama: [[email protected]](mailto:[email protected])
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- code
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
license: llama2
---
# **Code Llama**
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.
| | Base Model | Python | Instruct |
| --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
| 7B | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [meta-llama/CodeLlama-7b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) | [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) |
| 13B | [meta-llama/CodeLlama-13b-hf](https://huggingface.co/meta-llama/CodeLlama-13b-hf) | [meta-llama/CodeLlama-13b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Python-hf) | [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf) |
| 34B | [meta-llama/CodeLlama-34b-hf](https://huggingface.co/meta-llama/CodeLlama-34b-hf) | [meta-llama/CodeLlama-34b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Python-hf) | [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) |
| 70B | [meta-llama/CodeLlama-70b-hf](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | [meta-llama/CodeLlama-70b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Python-hf) | [meta-llama/CodeLlama-70b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) |
## Model Use
To use this model, please make sure to install transformers:
```bash
pip install transformers accelerate
```
Model capabilities:
- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "meta-llama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
sequences = pipeline(
'import socket\n\ndef ping_exponential_backoff(host: str):',
do_sample=True,
top_k=10,
temperature=0.1,
top_p=0.95,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_length=200,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
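Since infilling is listed as a model capability above, here is a minimal fill-in-the-middle sketch. It relies on the `<FILL_ME>` placeholder handling provided by the Code Llama tokenizer in `transformers`; treat it as an illustrative example rather than an official recipe.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# <FILL_ME> marks the span the model should complete between the prefix and the suffix.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(inputs["input_ids"], max_new_tokens=128)
# The tokens generated after the prompt are the infilled middle section.
filling = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
```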
## Model Details
*Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta
**Variations** Code Llama comes in four model sizes, and three variants:
* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment
All variants are available in sizes of 7B, 13B, 34B, and 70B parameters.
**This repository contains the base version of the 13B parameters model.**
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).
## Intended Use
**Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta's Research Super Cluster.
**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).
## Evaluation Results
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## Ethical Considerations and Limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide). |
MaziyarPanahi/Calme-7B-Instruct-v0.4 | MaziyarPanahi | 2024-04-03T16:55:47Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"7b",
"calme",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T20:44:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- mistral
- 7b
- calme
model-index:
- name: Calme-7B-Instruct-v0.4
results: []
model_name: Calme-7B-Instruct-v0.4
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" />
# MaziyarPanahi/Calme-7B-Instruct-v0.4
## Model Description
Calme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence.
### How to Use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-7B-Instruct-v0.4")
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.4")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.4")
```
### Quantized Models
> I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone!
- GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-7B-Instruct-v0.4-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.4-GGUF)
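For the GGUF files, a minimal llama-cpp-python sketch is shown below; the quant filename pattern is an assumption, so match it to a file that actually exists in the GGUF repository.

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# The filename glob is an assumption; pick whichever quant level you want from the repo.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Calme-7B-Instruct-v0.4-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe the pros and cons of the Docker system."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```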
## Examples
```
<s>[INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
describe about pros and cons of docker system. [/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
<s> [INST] Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step [/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
```
<details>
<summary>Show me the response</summary>
```
```
</details>
```
<s> [INST] explain step by step 25-4*2+3=? [/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
**Multilingual:**
```
<s> [INST] Vous êtes un assistant utile, respectueux et honnête. Répondez toujours de la manière la plus utile possible, tout en étant sûr. Vos réponses ne doivent inclure aucun contenu nuisible, contraire à l'éthique, raciste, sexiste, toxique, dangereux ou illégal. Assurez-vous que vos réponses sont socialement impartiales et de nature positive.
Si une question n'a pas de sens ou n'est pas cohérente d'un point de vue factuel, expliquez pourquoi au lieu de répondre quelque chose d'incorrect. Si vous ne connaissez pas la réponse à une question, veuillez ne pas partager de fausses informations.
Décrivez les avantages et les inconvénients du système Docker.[/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details>
<details>
<summary>Show me the response</summary>
```
```
</details>
```
<s>[INST] Ви - корисний, поважний та чесний помічник. Завжди відповідайте максимально корисно, будучи безпечним. Ваші відповіді не повинні містити шкідливого, неетичного, расистського, сексистського, токсичного, небезпечного або нелегального контенту. Будь ласка, переконайтеся, що ваші відповіді соціально неупереджені та мають позитивний характер.
Якщо питання не має сенсу або не є фактично послідовним, поясніть чому, замість того, щоб відповідати щось некоректне. Якщо ви не знаєте відповіді на питання, будь ласка, не діліться неправдивою інформацією.
Опис про переваги та недоліки системи Docker.[/INST]
```
<details>
<summary>Show me the response</summary>
```
```
</details> |
giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1 | giraffe176 | 2024-03-17T16:52:29Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2312.06281",
"arxiv:2311.03099",
"arxiv:2306.05685",
"arxiv:2306.01708",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:cognitivecomputations/WestLake-7B-v2-laser",
"base_model:senseable/garten2-7b",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T20:21:00Z | ---
base_model:
- mistralai/Mistral-7B-v0.1
- berkeley-nest/Starling-LM-7B-alpha
- mlabonne/AlphaMonarch-7B
- cognitivecomputations/WestLake-7B-v2-laser
- senseable/garten2-7b
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
model-index:
- name: Starling_Monarch_Westlake_Garten-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: EQ-Bench
type: eq-bench
config: EQ-Bench
split: v2.1
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 80.01
name: self-reported
source:
url: https://github.com/EQ-bench/EQ-Bench
name: EQ-Bench v2.1
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1
name: Open LLM Leaderboard
---
# Starling_Monarch_Westlake_Garten-7B-v0.1
<img src="https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/Chyn1eXYC0LSY6yVdeRBV.png" alt="drawing" width="800"/>
After experimenting with density for a previous merge (containing similar models), I decided to experiment with weight gradients. My thought was that if the merge was done with care and attention, a merge of really good models could become something greater than the sum of its parts.
I came across the EQ-Bench Benchmark [(Paper)](https://arxiv.org/abs/2312.06281) as part of my earlier testing. It is a very light and quick benchmark that yields powerful insights into how well the model performs in emotional intelligence related prompts.
As part of this process, I tried to figure out if there was a way to determine an optimal set of gradient weights that would lead to the most successful merge as measured against EQ-Bench. At first, my goal was to simply exceed WestLake-7B, but then I kept pushing to see what I could come up with.
Too late in the process, I learned that [dare_ties](https://arxiv.org/abs/2311.03099) has a random element to it. Valuable information for next time, I guess. After concluding that project, I began collecting more data, this time setting a specified seed in mergekit for reproducibility. As I was collecting data, I hit the goal I had set for myself.
This model is *not* a direct result of the above work, but that work is the genesis of how this model came to be.
I present, **Starling_Monarch_Westlake_Garten-7B-v0.1**, the **only 7B model to score > 80** on the EQ-Bench v2.1 benchmark found [here](https://github.com/EQ-bench/EQ-Bench), outscoring larger models like [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1) and [cognitivecomputations/dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b)
It also surpasses its components in the GSM8K benchmark, with a score of 71.95. I'll be looking to bring out more logic and emotion in the next evolution of this model.
It also earned 8.109 on MT-Bench[(paper)](https://arxiv.org/abs/2306.05685), outscoring Chat-GPT 3.5 and Claude v1.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
The seed for this merge is 176
### Models Merged
The following models were included in the merge:
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
* [senseable/garten2-7b](https://huggingface.co/senseable/garten2-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: cognitivecomputations/WestLake-7B-v2-laser
parameters:
density: 0.58
weight: [0.3877, 0.1636, 0.186, 0.0502]
- model: senseable/garten2-7b
parameters:
density: 0.58
weight: [0.234, 0.2423, 0.2148, 0.2775]
- model: berkeley-nest/Starling-LM-7B-alpha
parameters:
density: 0.58
weight: [0.1593, 0.1573, 0.1693, 0.3413]
- model: mlabonne/AlphaMonarch-7B
parameters:
density: 0.58
weight: [0.219, 0.4368, 0.4299, 0.331]
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
### Table of Benchmarks
## Open LLM Leaderboard
| | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---------------------------------------------------------|---------|-------|-----------|-------|------------|------------|-------|
| giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1 | 74.9 | 71.76 | 88.15 | 65.07 | 67.92 | 84.53 | 71.95 |
| mlabonne/AlphaMonarch-7B | 75.99 | 73.04 | 89.18 | 64.4 | 77.91 | 84.69 | 66.72 |
| senseable/WestLake-7B-v2 | 74.68 | 73.04 | 88.65 | 64.71 | 67.06 | 86.98 | 67.63 |
| berkeley-nest/Starling-LM-7B-alpha | 67.13 | 63.82 | 84.9 | 63.64 | 46.39 | 80.58 | 62.4 |
| senseable/garten2-7b | 72.65 | 69.37 | 87.54 | 65.44 | 59.5 | 84.69 | 69.37 |
## Yet Another LLM Leaderboard benchmarks
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1](https://huggingface.co/giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1)| 44.99| 76.93| 68.04| 47.71| 59.42|
|[mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) | 45.37| 77 | 78.39| 50.2 | 62.74|
|[berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) | 42.06| 72.72| 47.33| 42.53| 51.16 |
## Misc. Benchmarks
| | MT-Bench | EQ-Bench v2.1 |
|---------------------------------------------------------|---------------------------------------------|---------------------------------------------------------------------------------|
| giraffe176/Starling_Monarch_Westlake_Garten-7B-v0.1 | 8.109375 | 80.01 (3 Shot, ChatML, ooba) |
| mlabonne/AlphaMonarch-7B | 8.23750 | 76.08 |
| senseable/WestLake-7B-v2 | X | 78.7 |
| berkeley-nest/Starling-LM-7B-alpha | 8.09 | 68.69 (1 Shot, ChatML, ooba) |
| senseable/garten2-7b | X | 75.03 |
| claude-v1 | 7.900000 | 76.83 |
| gpt-3.5-turbo | 7.943750 | 71.74 |
| | [(Paper)](https://arxiv.org/abs/2306.05685) | [(Paper)](https://arxiv.org/abs/2312.06281) [Leaderboard](https://eqbench.com/) |
|
abideen/phi2-pro | abideen | 2024-03-17T10:37:49Z | 731 | 8 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"en",
"dataset:argilla/dpo-mix-7k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-17T06:52:43Z | ---
library_name: transformers
license: apache-2.0
datasets:
- argilla/dpo-mix-7k
language:
- en
---
# Phi2-PRO

*phi2-pro* is a fine-tuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)**
preference dataset using *Odds Ratio Preference Optimization (ORPO)*. The model has been trained for 1 epoch.
## 💥 LazyORPO
This model has been trained using **[LazyORPO](https://colab.research.google.com/drive/19ci5XIcJDxDVPY2xC1ftZ5z1kc2ah_rx?usp=sharing)**, a Colab notebook that makes the training process much easier. It is based on the [ORPO paper](https://huggingface.co/papers/2403.07691).

#### 🎭 What is ORPO?
Odds Ratio Preference Optimization (ORPO) proposes a new method to train LLMs by combining SFT and Alignment into a new objective (loss function), achieving state of the art results.
Some highlights of this technique are (a sketch of the objective follows the list below):
* 🧠 Reference model-free → memory friendly
* 🔄 Replaces SFT+DPO/PPO with 1 single method (ORPO)
* 🏆 ORPO Outperforms SFT, SFT+DPO on PHI-2, Llama 2, and Mistral
* 📊 Mistral ORPO achieves 12.20% on AlpacaEval2.0, 66.19% on IFEval, and 7.32 on MT-Bench, outperforming Hugging Face Zephyr Beta
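A compact sketch of the objective, using the notation of the ORPO paper ($y_w$ and $y_l$ are the chosen and rejected responses, and $\lambda$ weights the odds-ratio term):

$$
\mathcal{L}_{\mathrm{ORPO}} = \mathbb{E}_{(x,\,y_w,\,y_l)}\left[\mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}}\right],
\qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right),
\qquad
\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}
$$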
#### 💻 Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("abideen/phi2-pro", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=True)
inputs = tokenizer('''
"""
Write a detailed analogy between mathematics and a lighthouse.
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## 🏆 Evaluation
### COMING SOON |
Azure99/blossom-v5-9b | Azure99 | 2024-03-18T04:54:38Z | 731 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"dataset:Azure99/blossom-chat-v3",
"dataset:Azure99/blossom-math-v4",
"dataset:Azure99/blossom-wizard-v3",
"dataset:Azure99/blossom-orca-v3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T04:43:12Z | ---
license: apache-2.0
datasets:
- Azure99/blossom-chat-v3
- Azure99/blossom-math-v4
- Azure99/blossom-wizard-v3
- Azure99/blossom-orca-v3
language:
- zh
- en
---
# **BLOSSOM-v5-9b**
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### What's new?
The Blossom V5 series models are fully trained using high-quality data distilled from gpt-4-0125-preview, resulting in significant improvements.
### Introduction
Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Yi-9B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.
Training was conducted in two stages. The first stage used 40K Wizard, 40K Orca, 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used 10K Blossom chat multi-turn dialogue dataset, and 10% randomly sampled data from the first stage, training for 3 epochs.
### Inference
Inference is performed in the form of dialogue continuation.
Single-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|:
```
Multi-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: Hello! How can I assist you today?<|endoftext|>
|Human|: Generate a random number using python
|Bot|:
```
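A minimal continuation sketch using `transformers` is below; treating `<|endoftext|>` as the stop marker follows the note underneath, and the sampling settings are illustrative assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Azure99/blossom-v5-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build the continuation prompt exactly as shown above.
prompt = (
    "A chat between a human and an artificial intelligence bot. "
    "The bot gives helpful, detailed, and polite answers to the human's questions.\n"
    "|Human|: hello\n"
    "|Bot|: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```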
Note: At the end of the Bot's output in the historical conversation, append a `<|endoftext|>`. |
automerger/ShadowYamshadow-7B | automerger | 2024-03-19T21:13:39Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:CorticalStack/shadow-clown-7B-slerp",
"base_model:automerger/YamShadow-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-19T21:12:41Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- CorticalStack/shadow-clown-7B-slerp
- automerger/YamShadow-7B
---
# ShadowYamshadow-7B
ShadowYamshadow-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [CorticalStack/shadow-clown-7B-slerp](https://huggingface.co/CorticalStack/shadow-clown-7B-slerp)
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CorticalStack/shadow-clown-7B-slerp
layer_range: [0, 32]
- model: automerger/YamShadow-7B
layer_range: [0, 32]
merge_method: slerp
base_model: CorticalStack/shadow-clown-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/ShadowYamshadow-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
seyf1elislam/WestKunai-Hermes-10.7b-test | seyf1elislam | 2024-03-22T06:17:05Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"base_model:seyf1elislam/WestKunai-Hermes-7b",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-20T04:37:41Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
base_model:
- seyf1elislam/WestKunai-Hermes-7b
model-index:
- name: WestKunai-Hermes-10.7b-test
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-Hermes-10.7b-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.10
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-Hermes-10.7b-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-Hermes-10.7b-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 64.28
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-Hermes-10.7b-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-Hermes-10.7b-test
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 51.86
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=seyf1elislam/WestKunai-Hermes-7b
name: Open LLM Leaderboard
---
# WestKunai-Hermes-10.7b-test
This model replicates the configuration used in the [froggeric/WestLake-10.7B-v2](https://huggingface.co/froggeric/WestLake-10.7B-v2/) model to extend [WestKunai-Hermes-7b](https://huggingface.co/seyf1elislam/WestKunai-Hermes-7b) to 10.7B.
# Quantized versions :
- [**GGUF**](https://huggingface.co/seyf1elislam/WestKunai-Hermes-10.7b-test-GGUF)
# comparison
| Metric | diff |Current(10.7b)|Origin(7b)|
|---------------------------------|----:|----:|----:|
|Avg. |<span style="color:red">-3.76</span>|69.75|73.51|
|AI2 Reasoning Challenge (25-Shot)|<span style="color:red">-3.07</span>|68.09|71.16|
|HellaSwag (10-Shot) |<span style="color:red">-0.66</span>|87.10|87.76|
|MMLU (5-Shot) |<span style="color:red">-0.34</span>|64.43|64.77|
|TruthfulQA (0-shot) |<span style="color:red">-0.97</span>|64.28|65.25|
|Winogrande (5-shot) |<span style="color:red">-0.31</span>|82.72|83.03|
|GSM8k (5-shot) |<span style="color:red">-17.21</span>|51.86|69.07|
## Merge Details
### Models Merged
The following models were included in the merge:
* [seyf1elislam/WestKunai-Hermes-7b](https://huggingface.co/seyf1elislam/WestKunai-Hermes-7b)
## Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: seyf1elislam/WestKunai-Hermes-7b
layer_range: [0,9]
- sources:
- model: seyf1elislam/WestKunai-Hermes-7b
layer_range: [5,14]
- sources:
- model: seyf1elislam/WestKunai-Hermes-7b
layer_range: [10,19]
- sources:
- model: seyf1elislam/WestKunai-Hermes-7b
layer_range: [15,24]
- sources:
- model: seyf1elislam/WestKunai-Hermes-7b
layer_range: [20,32]
merge_method: passthrough
dtype: bfloat16
```
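A rough back-of-the-envelope for the resulting size (approximate figures based on the Mistral-7B architecture; a sketch, not an exact count): the five slices stack $9+9+9+9+12 = 48$ decoder layers in place of the original 32, and with roughly $0.218$B parameters per layer plus about $0.26$B for the embeddings and output head,

$$
0.26\,\text{B} + 48 \times 0.218\,\text{B} \approx 10.7\,\text{B}
$$

which is where the 10.7B figure comes from.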
## Usage Example
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "seyf1elislam/WestKunai-Hermes-10.7b-test"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_seyf1elislam__WestKunai-Hermes-10.7b-test)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.75|
|AI2 Reasoning Challenge (25-Shot)|68.09|
|HellaSwag (10-Shot) |87.10|
|MMLU (5-Shot) |64.43|
|TruthfulQA (0-shot) |64.28|
|Winogrande (5-shot) |82.72|
|GSM8k (5-shot) |51.86|
|
Changgil/K2S3-Mistral-7b-v1.0 | Changgil | 2024-03-21T01:01:19Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-21T00:50:57Z | ---
license: cc-by-nc-4.0
language:
- en
---
---
## Developed by :
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.0
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset.
* 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다.
### Training Method
* This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning).
* 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다.
### Hardware
* Hardware: Utilized two A100 (80G*2EA) GPUs for training.
* Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTTrainer with FSDP applied.
* 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다. |
eldogbbhed/NeuralMonarchCoderPearlBeagle-T3Q-Mistral-Orca-Math-DPO-7b | eldogbbhed | 2024-03-21T07:23:37Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"vortexmergekit",
"chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"eldogbbhed/NeuralMonarchCoderPearlBeagle",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-21T06:17:39Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- vortexmergekit
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- eldogbbhed/NeuralMonarchCoderPearlBeagle
---
# NeuralMonarchCoderPearlBeagle-T3Q-Mistral-Orca-Math-DPO-7b
This is a merge of multiple models brought together using the awesome [VortexMerge kit](https://colab.research.google.com/drive/1YjcvCLuNG1PK7Le6_4xhVU5VpzTwvGhk#scrollTo=UG5H2TK4gVyl).
Let's see what we've got in this merge:
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO) 🚀
* [eldogbbhed/NeuralMonarchCoderPearlBeagle](https://huggingface.co/eldogbbhed/NeuralMonarchCoderPearlBeagle) 🚀
## 🧩 Configuration
```yaml
models:
- model: mlabonne/NeuralBeagle14-7B
# no parameters necessary for base model
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
parameters:
density: 0.5
weight: 0.5
- model: eldogbbhed/NeuralMonarchCoderPearlBeagle
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
normalize: true
int8_mask: true
dtype: float16
``` |
0-hero/Matter-0.1-7B-boost-DPO-preview | 0-hero | 2024-03-23T18:07:36Z | 731 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-21T13:04:58Z | ---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1
language:
- en
---
## Matter 7B - 0.1 Boost - DPO (Mistral 7B Finetune)
DPO version of [Matter 7B Boost](https://huggingface.co/0-hero/Matter-0.1-7B-boost) fine-tuned on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datasets comprising >6B tokens
### Training
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
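The template above can be produced with the tokenizer's chat template. A minimal, hedged inference sketch (generation settings are illustrative, and it assumes the repository's tokenizer ships a ChatML chat template):
```python
# Minimal ChatML inference sketch; sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.1-7B-boost-DPO-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Summarize what ChatML is in one sentence."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```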
### Function Calling
The model also supports function calling. The following additional tokens are used for function calling:
Model function call tokens
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token
Function call response tokens
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
``` |
Herry443/Mistral-7B-KNUT-ref-en | Herry443 | 2024-03-24T07:24:23Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-23T18:11:34Z | ---
license: cc-by-4.0
language:
- en
---
### Model Details
- Base Model: [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### Datasets
- sampling [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- sampling [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- sampling [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)
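No code is provided in this card; as a rough illustration of the sampling described above, here is a hedged sketch using 🤗 Datasets (sample sizes and splits are assumptions, not the author's actual recipe):
```python
# Illustrative sketch of sampling from the listed datasets (sizes and splits are assumptions).
from datasets import load_dataset

def sample(name, n, seed=42, split="train"):
    ds = load_dataset(name, split=split)
    return ds.shuffle(seed=seed).select(range(min(n, len(ds))))

platypus  = sample("garage-bAInd/Open-Platypus", 5_000)
dolly     = sample("databricks/databricks-dolly-15k", 5_000)
orca_math = sample("microsoft/orca-math-word-problems-200k", 5_000)

print(len(platypus), len(dolly), len(orca_math))
```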
|
Kukedlc/NeuralMergeTest-001 | Kukedlc | 2024-03-24T19:15:27Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"Kukedlc/NeuralKrishna-7B-v3",
"Kukedlc/NeuralMarioMonarch-7B-slerp",
"base_model:liminerity/M7-7b",
"base_model:Kukedlc/NeuralKrishna-7B-v3",
"base_model:Kukedlc/NeuralMarioMonarch-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T19:04:16Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- Kukedlc/NeuralKrishna-7B-v3
- Kukedlc/NeuralMarioMonarch-7B-slerp
base_model:
- liminerity/M7-7b
- Kukedlc/NeuralKrishna-7B-v3
- Kukedlc/NeuralMarioMonarch-7B-slerp
license: apache-2.0
---
# NeuralMergeTest-001
NeuralMergeTest-001 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [Kukedlc/NeuralKrishna-7B-v3](https://huggingface.co/Kukedlc/NeuralKrishna-7B-v3)
* [Kukedlc/NeuralMarioMonarch-7B-slerp](https://huggingface.co/Kukedlc/NeuralMarioMonarch-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: liminerity/M7-7b
# no parameters necessary for base model
- model: liminerity/M7-7b
parameters:
density: 0.65
weight: 0.36
- model: Kukedlc/NeuralKrishna-7B-v3
parameters:
density: 0.6
weight: 0.34
- model: Kukedlc/NeuralMarioMonarch-7B-slerp
parameters:
density: 0.4
weight: 0.3
merge_method: dare_ties
base_model: liminerity/M7-7b
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralMergeTest-001"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MatthieuJ/Forbin_13B_M1_SLERP | MatthieuJ | 2024-03-25T18:43:25Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B",
"zhengr/MixTAO-7Bx2-MoE-v8.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T18:37:59Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
- zhengr/MixTAO-7Bx2-MoE-v8.1
---
# Forbin_13B_M1_SLERP
Forbin_13B_M1_SLERP is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B)
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
layer_range: [0, 32]
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [0, 32]
merge_method: slerp
base_model: zhengr/MixTAO-7Bx2-MoE-v8.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Locutusque/OpenCerebrum-1.0-7b-SFT | Locutusque | 2024-04-13T18:16:17Z | 731 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"open-source",
"code",
"math",
"chemistry",
"biology",
"question-answering",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:glaiveai/glaive-code-assistant",
"dataset:camel-ai/physics",
"dataset:camel-ai/math",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:grimulkan/theory-of-mind",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:m-a-p/Code-Feedback",
"dataset:Locutusque/arc-cot",
"dataset:jondurbin/airoboros-2.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T01:23:52Z | ---
language:
- en
license: apache-2.0
tags:
- open-source
- code
- math
- chemistry
- biology
- text-generation
- question-answering
datasets:
- Open-Orca/SlimOrca
- glaiveai/glaive-code-assistant
- camel-ai/physics
- camel-ai/math
- camel-ai/chemistry
- camel-ai/biology
- WizardLM/WizardLM_evol_instruct_V2_196k
- microsoft/orca-math-word-problems-200k
- grimulkan/theory-of-mind
- Vezora/Tested-22k-Python-Alpaca
- m-a-p/Code-Feedback
- Locutusque/arc-cot
- jondurbin/airoboros-2.1
- WizardLM/WizardLM_evol_instruct_70k
pipeline_tag: text-generation
---
# OpenCerebrum-1.0-7B-SFT
OpenCerebrum-1.0-7B-SFT is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset aimed at replicating capabilities of AetherResearch's proprietary Cerebrum model.
The model was fine-tuned on approximately 1.2 million examples across 14 datasets spanning coding, math, science, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels.
## Model Details
- **Base Model:** alpindale/Mistral-7B-v0.2-hf
- **Parameters:** 7 billion
- **Fine-Tuning Dataset Size:** ~1,200,000 examples
- **Fine-Tuning Data:** Amalgamation of 14 public datasets
- **Language:** English
- **License:** Apache 2.0
## Intended Use
OpenCerebrum-1.0-7B-SFT is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.
However, as an open-source replica trained on a subset of data compared to the original Cerebrum, it may not match Cerebrum's full performance. Additionally, biases and limitations of the fine-tuning data may be reflected in the model's outputs.
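For reference, a minimal, hedged inference sketch with 🤗 Transformers (the chat-template usage and sampling settings are assumptions; check the repository's tokenizer configuration for the intended prompt format):
```python
# Minimal hedged inference sketch; sampling settings are illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Locutusque/OpenCerebrum-1.0-7b-SFT",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain the difference between ionic and covalent bonds."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])
```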
## Limitations and Biases
- The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.
- With 1.2 million training examples, the fine-tuning data is still limited compared to the proprietary Cerebrum data.
- As the model is based on a 7B parameter model, it has computational and memory constraints compared to larger models.
## Training Details
The model was fine-tuned on the 14 datasets listed in the Datasets section, totaling approximately 1.2 million examples. Default training hyperparameters were used. In the future, the fine-tuning dataset may be condensed to more closely match the 5,000 example dataset reputedly used for the original Cerebrum model. |
Eurdem/megatron_v3_2x7B | Eurdem | 2024-03-27T12:33:51Z | 731 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"conversational",
"en",
"tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-27T07:38:41Z | ---
license: apache-2.0
tags:
- moe
- merge
language:
- en
- tr
---
# Megatron_v3_2x7B
Megatron_v3_2x7B is a bilingual Mixture of Experts (MoE) model that understands and answers instructions in English and Turkish.
## 💻 Usage/Kullanımı
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Eurdem/Megatron_v3_2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Tell me about AI"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=1024, do_sample=True, temperature=0.7, top_k=500, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
grimjim/Mistral-Starling-merge-trial1-7B | grimjim | 2024-03-29T02:15:55Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Nexusflow/Starling-LM-7B-beta",
"base_model:grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-28T15:30:21Z | ---
base_model:
- Nexusflow/Starling-LM-7B-beta
- grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# Mistral-Starling-merge-trial1-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The goal was to combine strong reasoning with 32K context length.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
* [grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B](https://huggingface.co/grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
layer_range: [0, 32]
- model: Nexusflow/Starling-LM-7B-beta
layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
|
nonetrix/pippafeet-11B-0.1 | nonetrix | 2024-03-30T21:53:45Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"arxiv:2203.05482",
"base_model:Endevor/EndlessRP-v3-7B",
"base_model:sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:undi95/Toppy-M-7B",
"base_model:yam-peleg/Experiment30-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T06:51:06Z | ---
base_model:
- Endevor/EndlessRP-v3-7B
- sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- undi95/Toppy-M-7B
- yam-peleg/Experiment30-7B
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
license: apache-2.0
---
# Pippafeet-11B-0.1
This model is a mix of some of the "best 7B roleplaying LLMs". I selected a few models based on "creativity" from a random benchmark, a final roleplaying LLM based on "IQ", and another LLM, merged twice, that "excels at general tasks" for its size according to a separate benchmark. My goal was to combine the "most creative" smaller roleplaying LLMs, merge them, and enhance the intelligence by incorporating a "decent general model" twice, along with a "smarter" roleplaying LLM. I don't really trust benchmarks much, but I figured they would at least give it some alignment; even if a model is overfitted to a dataset to score well, merging might negate that overfitting somewhat, and luckily that seems to have worked to some extent.
In my limited testing, this model performs really well, giving decent replies most of the time... that is, if you ignore the fatal flaws, which are unfortunately inherent to how this model was created. Since it's made by directly stacking the weights of other models, it likes to constantly create new words, stutter, and generally act strange; however, if you ignore this and fill in the blanks yourself, the model is quite decent. I plan to try to remove this weirdness if possible by using a LoRA, but I am not sure if I will be able to, no promises. If you have the compute to fine-tune this model, I implore you to, because I think it is a promising base.
Edit: Fine-tuning is pretty much impossible because ROCm is hot garbage and I should never have bought an AMD GPU; if someone has a functional GPU, please fine-tune it for me. It might be possible on CPU somehow, but likely not in FP16, slow as fuck, and only in GGUF.
Artwork source (please contact me if you would like it removed): https://twitter.com/Kumaartsu/status/1756793643384402070

Note: this model is in no way affiliated with Phase Connect, Pipkin Pippa, or the artist's artwork.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [Endevor/EndlessRP-v3-7B](https://huggingface.co/Endevor/EndlessRP-v3-7B)
* [sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [undi95/Toppy-M-7B](https://huggingface.co/undi95/Toppy-M-7B)
* [yam-peleg/Experiment30-7B](https://huggingface.co/yam-peleg/Experiment30-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
parameters:
weight: 1.0
slices:
- sources:
- model: yam-peleg/Experiment30-7B
layer_range: [0, 16]
- sources:
- model: Endevor/EndlessRP-v3-7B
layer_range: [8, 24]
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [17, 24]
- sources:
- model: undi95/Toppy-M-7B
layer_range: [20, 28]
- sources:
- model: sanjiwatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
layer_range: [28, 30]
- sources:
- model: yam-peleg/Experiment30-7B
layer_range: [29, 32]
dtype: float16
tokenizer_source: model:yam-peleg/Experiment30-7B
``` |
abhishek/autotrain-xva0j-mixtral8x7b | abhishek | 2024-03-31T16:12:28Z | 731 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"autotrain",
"text-generation-inference",
"conversational",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-31T15:41:10Z | ---
tags:
- autotrain
- text-generation-inference
- text-generation
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
jisukim8873/mistral-7B-alpaca-case-2-2 | jisukim8873 | 2024-04-01T05:31:10Z | 731 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-01T04:28:33Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
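Until the authors provide an official snippet, here is a generic, hedged starter sketch for loading this repository as a causal LM (the Alpaca-style prompt is an assumption based on the model name):
```python
# Generic hedged starter sketch; not an official example from the authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jisukim8873/mistral-7B-alpaca-case-2-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction:\nExplain what this model is for.\n\n### Response:\n"  # assumed Alpaca-style prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```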
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jeiku/Garbage_9B | jeiku | 2024-04-02T06:54:09Z | 731 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:ResplendentAI/Luna_Alpaca",
"base_model:ChaoticNeutrals/InfinityNexus_9B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T05:33:55Z | ---
base_model:
- ChaoticNeutrals/InfinityNexus_9B
- jeiku/luna_lora_9B
library_name: transformers
license: apache-2.0
datasets:
- ResplendentAI/Luna_Alpaca
language:
- en
---
# Garbage

This is a finetune of InfinityNexus_9B. This is my first time tuning a frankenmerge, so hopefully it works out. The goal is to improve intelligence and RP ability beyond the 7B original models. |
arlineka/KittyNyanster-v1 | arlineka | 2024-04-04T19:28:23Z | 731 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"chat",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T00:31:03Z | ---
license: cc-by-nc-4.0
tags:
- roleplay
- chat
- mistral
---
# KittyNyanster-v1
Is a beeg kitty.
# Benchmark

with LLM Judge by FastChat. |
Ppoyaa/KunoichiVerse-7B | Ppoyaa | 2024-04-03T20:44:58Z | 731 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"custom_code",
"en",
"base_model:Nitral-AI/Kunocchini-7b-128k-test",
"base_model:MTSAIR/multi_verse_model",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-03T18:40:55Z | ---
tags:
- merge
- mergekit
- lazymergekit
base_model:
- Nitral-AI/Kunocchini-7b-128k-test
- MTSAIR/multi_verse_model
license: apache-2.0
language:
- en
---
# KunoichiVerse-7B

# Description
KunoichiVerse-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Nitral-AI/Kunocchini-7b-128k-test](https://huggingface.co/Nitral-AI/Kunocchini-7b-128k-test)
* [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model)
This model uses a context window of 128k. Special thanks to Nitral-AI and MTSAIR for the models.
# GGUF
Special thanks to GGUFs made by [MarsupialAI](https://huggingface.co/MarsupialAI)
* [MarsupialAI/KunoichiVerse-7B_iMatrix_GGUF](https://huggingface.co/MarsupialAI/KunoichiVerse-7B_iMatrix_GGUF)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Nitral-AI/Kunocchini-7b-128k-test
layer_range: [0, 32]
- model: MTSAIR/multi_verse_model
layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Kunocchini-7b-128k-test
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Ppoyaa/KunoichiVerse-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
stablediffusionapi/samaritan-3d-cartoon-v40 | stablediffusionapi | 2024-04-08T09:35:27Z | 731 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-04-08T09:33:00Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Samaritan 3d Cartoon v4.0 API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below, and change **model_id** to "samaritan-3d-cartoon-v40".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/samaritan-3d-cartoon-v40)
Model link: [View model](https://modelslab.com/models/samaritan-3d-cartoon-v40)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "samaritan-3d-cartoon-v40",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
mPLUG/TinyChart-3B-768-siglip | mPLUG | 2024-04-26T16:39:53Z | 731 | 2 | transformers | [
"transformers",
"pytorch",
"siglip_vision_model",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T16:38:21Z | Entry not found |
duyntnet/Vistral-7B-Chat-imatrix-GGUF | duyntnet | 2024-05-14T02:38:43Z | 731 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Vistral-7B-Chat",
"text-generation",
"en",
"license:other",
"region:us"
] | text-generation | 2024-05-14T00:40:41Z | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Vistral-7B-Chat
---
Quantizations of https://huggingface.co/Viet-Mistral/Vistral-7B-Chat
# From original readme
## Usage
To enable single/multi-turn conversational chat with `Vistral-7B-Chat`, you can use the default chat template format:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
system_prompt = "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.\n"
system_prompt += "Câu trả lời của bạn không nên chứa bất kỳ nội dung gây hại, phân biệt chủng tộc, phân biệt giới tính, độc hại, nguy hiểm hoặc bất hợp pháp nào. Hãy đảm bảo rằng các câu trả lời của bạn không có thiên kiến xã hội và mang tính tích cực."
system_prompt += "Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác. Nếu bạn không biết câu trả lời cho một câu hỏi, hãy trẳ lời là bạn không biết và vui lòng không chia sẻ thông tin sai lệch."
tokenizer = AutoTokenizer.from_pretrained('Viet-Mistral/Vistral-7B-Chat')
model = AutoModelForCausalLM.from_pretrained(
'Viet-Mistral/Vistral-7B-Chat',
torch_dtype=torch.bfloat16, # change to torch.float16 if you're using V100
device_map="auto",
use_cache=True,
)
conversation = [{"role": "system", "content": system_prompt }]
while True:
human = input("Human: ")
if human.lower() == "reset":
conversation = [{"role": "system", "content": system_prompt }]
print("The chat history has been cleared!")
continue
conversation.append({"role": "user", "content": human })
input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt").to(model.device)
out_ids = model.generate(
input_ids=input_ids,
max_new_tokens=768,
do_sample=True,
top_p=0.95,
top_k=40,
temperature=0.1,
repetition_penalty=1.05,
)
assistant = tokenizer.batch_decode(out_ids[:, input_ids.size(1): ], skip_special_tokens=True)[0].strip()
print("Assistant: ", assistant)
conversation.append({"role": "assistant", "content": assistant })
``` |
saheedniyi/Llama3-8b-Naija_v1 | saheedniyi | 2024-06-04T16:17:40Z | 731 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:saheedniyi/Nairaland_v1_instruct_512QA",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-25T07:00:18Z | ---
library_name: transformers
license: llama3
datasets:
- saheedniyi/Nairaland_v1_instruct_512QA
language:
- en
pipeline_tag: text-generation
---
<!-- Provide a quick summary of what the model is/does. -->
Excited to announce the release of **Llama3-8b-Naija_v1**, a finetuned version of Meta-Llama-3-8B trained on a **Question-Answer** dataset from [Nairaland](https://www.nairaland.com/).
The model was built in an attempt to **"Nigerialize"** Llama-3, giving it a Nigerian-like behavior.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Saheedniyi](https://linkedin.com/in/azeez-saheed)
- **Language(s) (NLP):** English, Pidgin English
- **License:** [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/Mozilla/Meta-Llama-3-70B-Instruct-llamafile/blob/main/Meta-Llama-3-Community-License-Agreement.txt)
- **Finetuned from:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### Model Sources
<!-- Provide the basic links for the model. -->
- **[Repository](https://github.com/saheedniyi02/Llama3-8b-Naija_v1)**
- **Demo:** [Colab Notebook](https://colab.research.google.com/drive/1Fe65lZOGN7EnV10QW4jhA6oDKf4_PNvJ?usp=sharing)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
#necessary installations
!pip install bitsandbytes peft accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("saheedniyi/Llama3-8b-Naija_v1")
model = AutoModelForCausalLM.from_pretrained("saheedniyi/Llama3-8b-Naija_v1")
input_text = "What are the top places for tourism in Nigeria?"
formatted_prompt = f"### BEGIN CONVERSATION ###\n\n## User: ##\n{input_text}\n\n## Assistant: ##\n"
inputs = tokenizer(formatted_prompt, return_tensors="pt")
outputs = model.generate(**inputs.to("cuda"), max_new_tokens=512,pad_token_id=tokenizer.pad_token_id,do_sample=True,temperature=0.6,top_p=0.9,)
response=tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
When using the model, it is important to use the chat template that the model was trained on.
```
prompt = "INPUT YOUR PROMPT HERE"
formatted_prompt=f"### BEGIN CONVERSATION ###\n\n## User: ##\n{prompt}\n\n## Assistant: ##\n"
```
The model has a minor tokenization issue, so it is necessary to write a function to clean the output and make it more presentable.
```
def split_response(text):
return text.split("### END CONVERSATION")[0]
cleaned_response=split_response(response)
print(cleaned_response)
```
**This issue should be resolved in the next version of the model.** |
bartowski/Llama-3-Ultron-GGUF | bartowski | 2024-05-31T11:56:01Z | 731 | 1 | transformers | [
"transformers",
"gguf",
"text-generation",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-31T11:38:02Z | ---
library_name: transformers
tags: []
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Llama-3-Ultron
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3024">b3024</a> for quantization.
Original model: https://huggingface.co/jayasuryajsk/Llama-3-Ultron
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
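As one way to run a downloaded quant from Python, here is a hedged sketch using `llama-cpp-python` (the chosen file and settings are assumptions; any of the quants below will work the same way):
```python
# Hedged sketch using llama-cpp-python; the chosen quant file and settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-3-Ultron-Q4_K_M.gguf",  # downloaded as shown in the next section
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers if VRAM allows; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Ultron."},
        {"role": "user", "content": "Introduce yourself in two sentences."},
    ],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```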
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-Ultron-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-Ultron-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-Ultron-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-Ultron-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-Ultron-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-Ultron-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-Ultron-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-Ultron-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-Ultron-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-Ultron-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-Ultron-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-Ultron-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-Ultron-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-Ultron-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-Ultron-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-Ultron-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-Ultron-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-Ultron-GGUF/blob/main/Llama-3-Ultron-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Llama-3-Ultron-GGUF --include "Llama-3-Ultron-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Llama-3-Ultron-GGUF --include "Llama-3-Ultron-Q8_0.gguf/*" --local-dir Llama-3-Ultron-Q8_0
```
You can either specify a new local-dir (Llama-3-Ultron-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf | RichardErkhov | 2024-06-03T17:44:37Z | 731 | 0 | null | [
"gguf",
"arxiv:2212.04089",
"region:us"
] | null | 2024-06-03T08:50:53Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
internlm2-limarp-chat-20b - GGUF
- Model creator: https://huggingface.co/intervitens/
- Original model: https://huggingface.co/intervitens/internlm2-limarp-chat-20b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [internlm2-limarp-chat-20b.Q2_K.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q2_K.gguf) | Q2_K | 7.03GB |
| [internlm2-limarp-chat-20b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.IQ3_XS.gguf) | IQ3_XS | 7.79GB |
| [internlm2-limarp-chat-20b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.IQ3_S.gguf) | IQ3_S | 8.2GB |
| [internlm2-limarp-chat-20b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q3_K_S.gguf) | Q3_K_S | 8.16GB |
| [internlm2-limarp-chat-20b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.IQ3_M.gguf) | IQ3_M | 8.5GB |
| [internlm2-limarp-chat-20b.Q3_K.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q3_K.gguf) | Q3_K | 9.05GB |
| [internlm2-limarp-chat-20b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q3_K_M.gguf) | Q3_K_M | 9.05GB |
| [internlm2-limarp-chat-20b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q3_K_L.gguf) | Q3_K_L | 9.83GB |
| [internlm2-limarp-chat-20b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.IQ4_XS.gguf) | IQ4_XS | 10.12GB |
| [internlm2-limarp-chat-20b.Q4_0.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q4_0.gguf) | Q4_0 | 10.55GB |
| [internlm2-limarp-chat-20b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.IQ4_NL.gguf) | IQ4_NL | 10.65GB |
| [internlm2-limarp-chat-20b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q4_K_S.gguf) | Q4_K_S | 10.62GB |
| [internlm2-limarp-chat-20b.Q4_K.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q4_K.gguf) | Q4_K | 11.16GB |
| [internlm2-limarp-chat-20b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q4_K_M.gguf) | Q4_K_M | 11.16GB |
| [internlm2-limarp-chat-20b.Q4_1.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q4_1.gguf) | Q4_1 | 11.67GB |
| [internlm2-limarp-chat-20b.Q5_0.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q5_0.gguf) | Q5_0 | 12.79GB |
| [internlm2-limarp-chat-20b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q5_K_S.gguf) | Q5_K_S | 12.79GB |
| [internlm2-limarp-chat-20b.Q5_K.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q5_K.gguf) | Q5_K | 13.11GB |
| [internlm2-limarp-chat-20b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q5_K_M.gguf) | Q5_K_M | 13.11GB |
| [internlm2-limarp-chat-20b.Q5_1.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q5_1.gguf) | Q5_1 | 13.91GB |
| [internlm2-limarp-chat-20b.Q6_K.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q6_K.gguf) | Q6_K | 15.18GB |
| [internlm2-limarp-chat-20b.Q8_0.gguf](https://huggingface.co/RichardErkhov/intervitens_-_internlm2-limarp-chat-20b-gguf/blob/main/internlm2-limarp-chat-20b.Q8_0.gguf) | Q8_0 | 19.66GB |
Original model description:
---
base_model: [intervitens/internlm2-base-20b-llama]
language:
- en
pipeline_tag: text-generation
tags:
- mergekit
- merge
license: other
datasets:
- lemonilia/LimaRP
---
Experimental model, LimaRP LoRA trained on top of internlm2-base-20b with 8192 context length and merged with internlm2-chat-20b.
Prompt format is ChatML.
***
# internlm2-06limarp-1chat-TASK_ARITHM-20b-v0.03
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using intervitens/internlm2-base-20b-llama as a base.
### Models Merged
The following models were included in the merge:
* ./internlm2-limarp-20b-v0.03
* ./internlm2-chat-20b-llama
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ./internlm2-chat-20b-llama
parameters:
weight: 1.0
- model: ./internlm2-limarp-20b-v0.03
parameters:
weight: 0.6
merge_method: task_arithmetic
base_model: ./internlm2-base-20b-llama
parameters:
#normalize: false
#int8_mask: true
dtype: bfloat16
```
|
ArnavDhiman/ai-avatar-generator | ArnavDhiman | 2024-06-26T20:23:21Z | 731 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-06-26T20:17:22Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Ai-avatar-generator Dreambooth model trained by ArnavDhiman with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
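You can also try the concept directly with 🤗 Diffusers; a minimal, hedged sketch (the prompt is an assumption — check the training notebook for the instance token actually used):
```python
# Hedged Diffusers sketch; the instance token in the prompt is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ArnavDhiman/ai-avatar-generator", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait avatar of a person, digital art"  # append the concept's instance token if one was trained
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("avatar.png")
```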
Sample pictures of this concept:
|
NikolayKozloff/RoLlama3-8b-Instruct-Q6_K_L-GGUF | NikolayKozloff | 2024-07-01T18:09:48Z | 731 | 1 | null | [
"gguf",
"text-generation-inference",
"ro",
"region:us"
] | null | 2024-07-01T17:44:47Z | ---
language:
- ro
tags:
- text-generation-inference
---
Best quality quant created using this instruction: https://huggingface.co/bartowski/Phi-3-medium-128k-instruct-GGUF/discussions/3#6679c0ce761779cf45d2321b |
arampacha/gpt-neo-therapist-small | arampacha | 2022-03-31T20:34:26Z | 730 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"onnx",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-03-30T08:40:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: gpt-neo-therapist-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-therapist-small
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6731
- Rouge1: 39.5028
- Rouge2: 6.43
- Rougel: 24.0091
- Rougelsum: 35.4481
- Gen Len: 204.1329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 24
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 9.9955 | 0.97 | 7 | 6.8195 | 18.6047 | 1.0194 | 14.8565 | 17.9774 | 212.0983 |
| 6.9729 | 1.97 | 14 | 5.6783 | 26.3789 | 3.0779 | 18.5195 | 24.8592 | 203.0925 |
| 5.2614 | 2.97 | 21 | 5.0506 | 34.9428 | 4.921 | 21.9741 | 32.1122 | 206.2775 |
| 5.0599 | 3.97 | 28 | 4.7372 | 38.5235 | 6.2251 | 23.5923 | 34.5633 | 204.2428 |
| 4.5479 | 4.97 | 35 | 4.6731 | 39.5028 | 6.43 | 24.0091 | 35.4481 | 204.1329 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
kabachuha/modelscope-damo-text2video-pruned-weights | kabachuha | 2023-03-23T12:19:20Z | 730 | 39 | open_clip | [
"open_clip",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-03-22T08:42:25Z | ---
license: cc-by-nc-4.0
---
https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis, but with fp16 (half precision) weights
Read all the info here https://huggingface.co/damo-vilab/modelscope-damo-text-to-video-synthesis/blob/main/README.md
|
radames/instruct-pix2pix-img2img | radames | 2023-05-05T20:59:08Z | 730 | 16 | diffusers | [
"diffusers",
"safetensors",
"image-to-image",
"license:mit",
"diffusers:StableDiffusionInstructPix2PixPipeline",
"region:us"
] | image-to-image | 2023-05-05T20:58:32Z | ---
license: mit
duplicated_from: timbrooks/instruct-pix2pix
library_name: diffusers
tags:
- image-to-image
---
# InstructPix2Pix: Learning to Follow Image Editing Instructions
GitHub: https://github.com/timothybrooks/instruct-pix2pix
<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
## Example
To use `InstructPix2Pix`, install `diffusers` using `main` for now. The pipeline will be available in the next release
```bash
pip install diffusers accelerate safetensors transformers
```
```python
import PIL
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
model_id = "timbrooks/instruct-pix2pix"
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
url = "https://raw.githubusercontent.com/timothybrooks/instruct-pix2pix/main/imgs/example.jpg"
def download_image(url):
image = PIL.Image.open(requests.get(url, stream=True).raw)
image = PIL.ImageOps.exif_transpose(image)
image = image.convert("RGB")
return image
image = download_image(url)
prompt = "turn him into cyborg"
images = pipe(prompt, image=image, num_inference_steps=10, image_guidance_scale=1).images
images[0]
``` |
yentinglin/Taiwan-LLM-7B-v2.0.1-chat | yentinglin | 2023-12-01T02:37:20Z | 730 | 32 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"zh",
"arxiv:2311.17487",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-10T16:30:19Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
license: apache-2.0
language:
- zh
widget:
- text: >-
A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's
questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT:
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Acknowledge license to accept the repository.
extra_gated_prompt: Please contact the author for access.
extra_gated_button_content: Acknowledge license 同意以上內容
extra_gated_fields:
Name: text
Mail: text
Organization: text
Country: text
Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox
使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# 🌟 Check out the [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟
# Model Card for Taiwan LLM 7B v2.0.1 chat
Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan.
Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning.
This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances.
It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance.
For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf).
## Model description
- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw)
- **Finetuned from model:** [yentinglin/Taiwan-LLM-7B-v2.0-base](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0-base)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/MiuLab/Taiwan-LLaMa
- **Demo:** https://twllm.com/
## Performance

## Intended uses
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
```python
# pip install transformers>=4.34
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-7B-v2.0.1-chat", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "你是一個人工智慧助理",
},
{"role": "user", "content": "東北季風如何影響台灣氣候?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
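If you prefer to build the prompt by hand instead of going through the chat template, the widget text in this card's metadata suggests a Vicuna-style format. A hedged sketch follows, reusing the `pipe` object from the example above; whether this string exactly matches the tokenizer's chat template is an assumption.
```python
# Hedged sketch: manual Vicuna-style prompt, following this card's widget example.
system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
prompt = f"{system} USER: 東北季風如何影響台灣氣候? ASSISTANT:"
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```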
### Training hyperparameters



The following hyperparameters were used during training (see the illustrative sketch after this list):
- learning_rate: 5e-05
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5.0
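The card does not say which training stack produced these settings. As an illustration only, here is one way to express them with 🤗 `TrainingArguments`; the field names are real `TrainingArguments` arguments, while the output path, precision, and any unlisted settings (batch sizes, gradient accumulation, the multi-GPU launcher) are assumptions.
```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# Everything not listed in the card (precision, batch sizes, output path) is assumed.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="taiwan-llm-7b-v2.0.1-chat",  # hypothetical output path
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=5.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,  # assumption; the card does not state the training precision
)
```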
## Citation
If you find Taiwan LLM is useful in your work, please cite it with:
```
@misc{lin2023taiwan,
title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
author={Yen-Ting Lin and Yun-Nung Chen},
year={2023},
eprint={2311.17487},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# Acknowledgement
Taiwan LLM v2 was developed in collaboration with [Ubitus K.K.](http://ubitus.net), which provided valuable compute resources for the project.
|
s3nh/nsfw-noromaid-mistral-instruct | s3nh | 2024-01-08T09:26:50Z | 730 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"not-for-all-audiences",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:Azazelle/Half-NSFW_Noromaid-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-07T14:51:27Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- Azazelle/Half-NSFW_Noromaid-7b
tags:
- mergekit
- merge
- not-for-all-audiences
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Azazelle/Half-NSFW_Noromaid-7b](https://huggingface.co/Azazelle/Half-NSFW_Noromaid-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.22, 0.61, 0.46, 0.77, 1.0]
- filter: mlp
value: [0.78, 0.39, 0.54, 0.23, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model: Azazelle/Half-NSFW_Noromaid-7b
``` |
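The card stops at the merge recipe. As a usage illustration (not part of the original card), the merged checkpoint should load like any Mistral-style chat model; the repository id below and the inherited Mistral-Instruct chat template are assumptions.
```python
# Hedged usage sketch for the merged model; repo id and chat template are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "s3nh/nsfw-noromaid-mistral-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set on a rainy night."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```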