modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k items) | pipeline_tag (48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1-901k chars) |
---|---|---|---|---|---|---|---|---|---|
Yhyu13/LMCocktail-Mistral-7B-v1 | Yhyu13 | 2023-12-27T09:33:15Z | 1,383 | 3 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"conversational",
"arxiv:2311.13534",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-27T08:04:10Z | ---
license: apache-2.0
---
# LM-cocktail Mistral 7B v1
This is a 50%-50% merge of two of the best Mistral models:
https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1
both of which claim to beat chatgpt-3.5-turbo on almost all metrics.
# Alpaca Eval
I am thrilled to announce that in my local community AlpacaEval run (with ChatGPT as the judge), LMCocktail 7B ranked as the second best model, just behind GPT-4 and even ahead of my previous best model, [LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1.git). You can also check the leaderboard at [./Alpaca_eval/chatgpt_fn_--LMCocktail-Mistral-7B-v1/](./Alpaca_eval/chatgpt_fn_--LMCocktail-Mistral-7B-v1/)
```
win_rate standard_error n_total avg_length
gpt4 73.79 1.54 805 1365
LMCocktail-7B-v1(new) 73.54 1.55 805 1870
LMCocktail-10.7B-v1(new) 73.45 1.56 804 1203
claude 70.37 1.60 805 1082
chatgpt 66.09 1.66 805 811
wizardlm-13b 65.16 1.67 805 985
vicuna-13b 64.10 1.69 805 1037
guanaco-65b 62.36 1.71 805 1249
oasst-rlhf-llama-33b 62.05 1.71 805 1079
alpaca-farm-ppo-human 60.25 1.72 805 803
falcon-40b-instruct 56.52 1.74 805 662
text_davinci_003 50.00 0.00 805 307
alpaca-7b 45.22 1.74 805 396
text_davinci_001 28.07 1.56 805 296
```
# Code
LM-cocktail is a novel technique for merging multiple models: https://arxiv.org/abs/2311.13534
The code lives in this repo: https://github.com/FlagOpen/FlagEmbedding.git
Merging scripts are available under the [./scripts](./scripts) folder.
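For illustration, here is a minimal sketch of a plain 50%-50% parameter average done directly with `transformers`; it is not the official LM-cocktail tooling (use the FlagEmbedding repo and the scripts above for that), and the output directory name is just an example.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The two ingredient models listed above; their weights are averaged 50%-50%.
model_a_id = "mistralai/Mistral-7B-Instruct-v0.2"
model_b_id = "xDAN-AI/xDAN-L1-Chat-RL-v1"

model_a = AutoModelForCausalLM.from_pretrained(model_a_id, torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained(model_b_id, torch_dtype=torch.bfloat16)

# Average every parameter tensor of the two state dicts (both models share the Mistral-7B layout).
state_a = model_a.state_dict()
state_b = model_b.state_dict()
merged = {name: 0.5 * state_a[name] + 0.5 * state_b[name] for name in state_a}

# Load the averaged weights back into one of the models and save the result.
model_a.load_state_dict(merged)
model_a.save_pretrained("LMCocktail-Mistral-7B-v1-sketch")
AutoTokenizer.from_pretrained(model_a_id).save_pretrained("LMCocktail-Mistral-7B-v1-sketch")
```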
|
mrp/simcse-model-m-bert-thai-cased | mrp | 2021-10-05T05:48:44Z | 1,382 | 7 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# mrp/simcse-model-m-bert-thai-cased
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
We train SimCSE ([paper](https://arxiv.org/pdf/2104.08821.pdf)) using mBERT as the base model, with Thai Wikipedia as training data ([here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["เธเธฑเธเธเธฐเธเธทเธญเธเธเธฃเธฑเธเธเธฒเธเธดเธขเธฑเธเนเธเธฅเธฐ!", "เธเธงเธเธชเธฒเธกเธเธตเธเธฅเนเธกเนเธเนเธฒ!"]
model = SentenceTransformer('mrp/simcse-model-m-bert-thai-cased')
embeddings = model.encode(sentences)
print(embeddings)
``` |
TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16 | TheBloke | 2023-07-02T20:34:58Z | 1,382 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-30T13:45:44Z | ---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Minlik's Chinese Alpaca 33B Merged fp16
This is fp16 pytorch format model files for [Minlik's Chinese Alpaca 33B Merged](https://huggingface.co/minlik/chinese-alpaca-33b-merged) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/minlik/chinese-alpaca-33b-merged)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` has been defaulted to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
import argparse
model_name_or_path = "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16"
use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
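For intuition, here is a minimal, self-contained sketch of the linear position-interpolation idea behind SuperHOT - compressing 8192 positions into the original 2048-position range with a scale of 0.25. It is not the contents of `llama_rope_scaled_monkey_patch.py`, just an illustration of the concept:
```python
import torch

def rotary_angles(seq_len: int, dim: int = 128, base: float = 10000.0, scale: float = 1.0):
    """Compute rotary-embedding angles; scale < 1 compresses positions (position interpolation)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    positions = torch.arange(seq_len, dtype=torch.float32) * scale  # e.g. scale=0.25 maps 8192 -> 2048
    return torch.outer(positions, inv_freq)  # (seq_len, dim/2) angles fed into sin/cos

# Original model: positions 0..2047. With scale=0.25, positions 0..8191 span the same angle range.
original = rotary_angles(2048, scale=1.0)
interpolated = rotary_angles(8192, scale=0.25)
print(original[-1, 0].item(), interpolated[-1, 0].item())  # nearly identical maximum angles
```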
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.
You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
# Original model card: Minlik's Chinese Alpaca 33B Merged
The Chinese Alpaca-33B model was obtained by adding a Chinese vocabulary to the tokenizer, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets on top of that.
The base and LoRA models used for the model conversion are as follows:
- base-model: elinas/llama-30b-hf-transformers-4.29
- lora-model: ziqingyang/chinese-alpaca-lora-33b
For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v4.0
### Usage
1. Install the required packages
```bash
pip install sentencepiece
pip install transformers>=4.28.0
```
2. Generate text
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{text}
### Response:"""
tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-33b-merged').half().to('cuda')
model.eval()
text = '第一个登上月球的人是谁？'  # "Who was the first person to land on the moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')
with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    ).cuda()
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(prompt, '').strip())
```
|
Undi95/Mistral-11B-v0.1 | Undi95 | 2023-10-09T15:37:10Z | 1,382 | 15 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-09T13:16:54Z | ---
license: apache-2.0
tags:
- mistral
- pretrained
---
This is Mistral, but in 11B.
I took the layers of the original Mistral-7B and duplicated some of them; this is the first frankenstein-style method that I found "acceptable" for expanding Mistral.
It seems that the first 8 layers of the model are very important; having duplicates of those layers in the model would, I think, confuse it.
UPDATE: Forced mergekit to output a bfloat16 file. It should be the same thing, but since the base model is bfloat16 I wanted it to stay bf16 like the OG model. Even though the config file said bfloat16 earlier, the weights were actually float16.
<!-- description start -->
## Description
This repo contains fp16 files of Mistral-11B-v0.1.
<!-- description end -->
<!-- description start -->
## Model used
- [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1/)
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
## The secret sauce
```
slices:
  - sources:
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 24]
  - sources:
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
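As a rough illustration of what this passthrough merge produces (my own reading of the config above, not output from mergekit): the two slices stack layers 0-23 and then layers 8-31 of the same model, so layers 8-23 appear twice and the result has 48 decoder layers.
```python
# Sketch: which source layers end up in the 11B frankenmerge, per the config above.
slice_a = list(range(0, 24))   # layers 0-23 of Mistral-7B-v0.1
slice_b = list(range(8, 32))   # layers 8-31 of Mistral-7B-v0.1
merged_layers = slice_a + slice_b

print(len(merged_layers))                   # 48 layers instead of the original 32
print(sorted(set(slice_a) & set(slice_b)))  # layers 8..23 are present twice
```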
Special thanks to Sushi.
If you want to support me, you can [here](https://ko-fi.com/undiai). |
jondurbin/airoboros-l2-70b-3.1.2 | jondurbin | 2023-10-22T11:44:10Z | 1,382 | 16 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-21T07:58:41Z | ---
license: llama2
datasets:
- jondurbin/airoboros-3.1
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
A model built on the [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of the [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and/or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can be applied with the Hugging Face tokenizer's `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### MathJSON
Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/
I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)
__Use a very low temperature!__
Example:
```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
Output:
```
The area (A) of a circle is given by the formula A = ฯrยฒ, where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = ฯrยฒ.
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.
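For illustration, here is a tiny, hypothetical evaluator (not the actual `mathjson.py` from the airoboros repo) that handles just the operators appearing in the example above:
```python
import json
import math
import re

def extract_mathjson(text):
    """Pull the JSON expression out of the <mathjson>...</mathjson> tags in a model response."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", text, re.S)
    return json.loads(match.group(1)) if match else None

def evaluate(expr):
    """Recursively evaluate a small subset of MathJSON: numbers, Pi, Multiply, Power."""
    if isinstance(expr, (int, float)):
        return expr
    if expr == "Pi":
        return math.pi
    op, *args = expr
    values = [evaluate(a) for a in args]
    if op == "Multiply":
        return math.prod(values)
    if op == "Power":
        return values[0] ** values[1]
    raise ValueError(f"unsupported operator: {op}")

# The example response above: ["Multiply", "Pi", ["Power", 17.2456, 2]]
print(evaluate(["Multiply", "Pi", ["Power", 17.2456, 2]]))  # ~934.3
```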
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure that, if the context is completely unrelated, the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
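For convenience, here is a small, hypothetical helper (the function name and structure are my own, not part of airoboros) that assembles a closed-context prompt from these delimiters:
```python
def build_closed_context_prompt(blocks, instruction):
    """blocks: list of (metadata dict, text) pairs; instruction: the question(s) to ask."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

# Reproduces the trivial example below.
prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
```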
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
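To act on such a response, you would parse the JSON and dispatch to the matching function. A minimal, hypothetical dispatcher might look like this (the `file_analytics` implementation and file path below are illustrative, not part of airoboros):
```python
import json

def file_analytics(action, filters, path="my_text_file.txt"):
    # Example implementation for the "count_occurrences" action described in the prompt.
    if action == "count_occurrences":
        with open(path, encoding="utf-8") as f:
            return f.read().count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

AVAILABLE_FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_response: str):
    call = json.loads(model_response)          # e.g. the JSON block shown above
    func = AVAILABLE_FUNCTIONS[call["function"]]
    return func(**call["params"])              # -> number of occurrences of "Python"
```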
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests
def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    ...  # search via DuckDuckGo using search_string and return the text content

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    ...  # call the model with prompt, return the output

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
Weyaxi/zephyr-alpha-Nebula-v2-7B | Weyaxi | 2023-11-12T11:44:15Z | 1,382 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-12T11:40:47Z |
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# zephyr-alpha-Nebula-v2-7B
zephyr-alpha-Nebula-v2-7B is a merge of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) and [PulsarAI/Nebula-v2-7B-Lora](https://huggingface.co/PulsarAI/Nebula-v2-7B-Lora)
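Merging a LoRA adapter into a base model like this is typically done with `peft`; here is a minimal sketch under the assumption that the adapter repo is a standard PEFT adapter (the output directory is just an example):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceH4/zephyr-7b-alpha"
lora_id = "PulsarAI/Nebula-v2-7B-Lora"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, lora_id)   # attach the LoRA adapter
model = model.merge_and_unload()                   # fold the adapter weights into the base model

model.save_pretrained("zephyr-alpha-Nebula-v2-7B")
AutoTokenizer.from_pretrained(base_id).save_pretrained("zephyr-alpha-Nebula-v2-7B")
```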
# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))
| Metric | Value |
|-----------------------|-----------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
| DROP (3-shot) | |
|
beberik/Nyxene-v2-11B | beberik | 2024-03-04T16:16:02Z | 1,382 | 8 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-04T19:51:21Z | ---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Nyxene-v2-11B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.26
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=beberik/Nyxene-v2-11B
name: Open LLM Leaderboard
---
## Description
This repo contains bf16 files of Nyxene-v2-11B. It feels like, with the new models, the 1% mix used in the [previous version](https://huggingface.co/beberik/Nyxene-v1-11B) is no longer needed. And yes, a new version. Again.
## Model used
- [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
- [openaccess-ai-collective/DPOpenHermes-7B](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B)
- [fblgit/una-cybertron-7b-v2](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
- [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
## Prompt template
The best one after further testing is this one:
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
## The secret sauce
loyal-piano-cybertron-11B :
```
slices:
  - sources:
      - model: fblgit/una-cybertron-7b-v2
        layer_range: [0, 24]
  - sources:
      - model: chargoddard/loyal-piano-m7-cdpo
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Starling-DPOHermes-11B :
```
slices:
  - sources:
      - model: berkeley-nest/Starling-LM-7B-alpha
        layer_range: [0, 24]
  - sources:
      - model: openaccess-ai-collective/DPOpenHermes-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Nyxene-11B :
```
slices:
  - sources:
      - model: loyal-piano-cybertron-11B
        layer_range: [0, 48]
      - model: Starling-NeuralHermes-11B
        layer_range: [0, 48]
merge_method: slerp
base_model: loyal-piano-cybertron-11B
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I used [mergekit](https://github.com/cg123/mergekit) for all the merging described here.
Thanks to [Undi95](https://huggingface.co/Undi95) for the original [11B mistral merge](https://huggingface.co/Undi95/Mistral-11B-OmniMix) recipe.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__Nyxene-v2-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.84|
|AI2 Reasoning Challenge (25-Shot)|67.41|
|HellaSwag (10-Shot) |84.54|
|MMLU (5-Shot) |65.26|
|TruthfulQA (0-shot) |55.62|
|Winogrande (5-shot) |79.56|
|GSM8k (5-shot) |54.66|
|
Weyaxi/MetaMath-NeuralHermes-2.5-Mistral-7B-Linear | Weyaxi | 2023-12-27T16:02:09Z | 1,382 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-05T10:59:54Z | ---
license: apache-2.0
---
models:
  - model: meta-math/MetaMath-Mistral-7B
    parameters:
      weight: 0.5
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      weight: 0.3
merge_method: linear
dtype: float16 |
itsliupeng/openllama-7b-icl | itsliupeng | 2024-01-11T04:09:53Z | 1,382 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2310.10638",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-08T05:54:24Z | ---
license: apache-2.0
---
Trained openllama-7b with [in-context learning](https://arxiv.org/abs/2310.10638).
A reproduction of OpenLLaMA trained on 128 H100 GPUs in bfloat16.
The pre-training data consists of Falcon, StarCoder, and the Wikipedia, arXiv, books, and StackExchange subsets from RedPajama; in total, this encompasses nearly 1 trillion tokens.
The model was trained over a single epoch, incorporating 2000 warm-up steps and a cosine learning rate schedule, starting at 3e-5 with a 4M-token batch size.
The sole distinction from the [OpenLLaMA 7B Base](https://huggingface.co/itsliupeng/openllama-7b-base) lies in the organization of the Falcon documents, which follows the methodology outlined in this [arXiv paper](https://arxiv.org/abs/2310.10638).
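To make the warmup-plus-cosine schedule mentioned above concrete, here is a small sketch of a linear warmup over 2000 steps followed by cosine decay from a peak of 3e-5; the total step count and the floor value are illustrative assumptions, not numbers from the actual run:
```python
import math

def learning_rate(step, peak_lr=3e-5, warmup_steps=2000, total_steps=250_000, min_lr=0.0):
    # Linear warmup to peak_lr, then cosine decay towards min_lr.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

print(learning_rate(1000), learning_rate(2000), learning_rate(250_000))  # ramp, peak, floor
```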

|
adamo1139/Yi-34B-200K-AEZAKMI-v2 | adamo1139 | 2024-06-26T17:36:47Z | 1,382 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llm",
"fine-tune",
"yi",
"conversational",
"dataset:adamo1139/AEZAKMI_v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-13T00:23:05Z | ---
license: apache-2.0
tags:
- llm
- fine-tune
- yi
datasets:
- adamo1139/AEZAKMI_v2
license_name: yi-license
license_link: LICENSE
model-index:
- name: Yi-34B-200K-AEZAKMI-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 56.74
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-v2
name: Open LLM Leaderboard
---
## Model description
Yi-34B 200K base model fine-tuned on the AEZAKMI v2 dataset. Training took around 25 hours on a single local RTX 3090 Ti.
It's like airoboros but with less gptslop, no refusals, and less of the typical language used by RLHF'd OpenAI models.
Say goodbye to "It's important to remember"! \
Prompt format is standard ChatML. Don't expect it to be good at math or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy free chatbot.
Cost of this fine-tune is about $10 in electricity. It took me 3 tries to get it right.
The base model used for fine-tuning was the 200K-context Yi-34B-Llama model shared by larryvrh.
I had to lower max_position_embeddings in config.json and model_max_length for training to start, otherwise I was OOMing straight away.
My first attempt had max_position_embeddings set to 16384 and model_max_length set to 200000. This allowed fine-tuning to finish, but that model was broken after applying the LoRA and merging it. \
This attempt had both max_position_embeddings and model_max_length set to 4096, which worked perfectly fine.
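For anyone reproducing this, here is a minimal sketch of lowering those two settings before training (the local path is a placeholder; the actual training was done with Axolotl, so this is only illustrative):
```python
from transformers import AutoConfig, AutoTokenizer

base_path = "path/to/Yi-34B-200K-llamafied"  # placeholder: local copy of the llamafied 200K base model

config = AutoConfig.from_pretrained(base_path)
config.max_position_embeddings = 4096        # lowered from 200000 so training fits in memory
config.save_pretrained(base_path)

tokenizer = AutoTokenizer.from_pretrained(base_path)
tokenizer.model_max_length = 4096            # keep the tokenizer limit in sync
tokenizer.save_pretrained(base_path)
```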
## Quants!
Huge thank you to LoneStriker and TheBloke for providing quantized versions.
EXL2 \
3bpw - https://huggingface.co/LoneStriker/Yi-34B-200K-AEZAKMI-v2-3.0bpw-h6-exl2 \
4bpw - https://huggingface.co/LoneStriker/Yi-34B-200K-AEZAKMI-v2-4.0bpw-h6-exl2 \
4.65bpw - https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2-exl2-4.65bpw \
5bpw - https://huggingface.co/LoneStriker/Yi-34B-200K-AEZAKMI-v2-5.0bpw-h6-exl2 \
6bpw - https://huggingface.co/LoneStriker/Yi-34B-200K-AEZAKMI-v2-6.0bpw-h6-exl2 \
8bpw - https://huggingface.co/LoneStriker/Yi-34B-200K-AEZAKMI-v2-8.0bpw-h8-exl2
GGUF - https://huggingface.co/TheBloke/Yi-34B-200K-AEZAKMI-v2-GGUF
GPTQ - https://huggingface.co/TheBloke/Yi-34B-200K-AEZAKMI-v2-GPTQ
AWQ - https://huggingface.co/TheBloke/Yi-34B-200K-AEZAKMI-v2-AWQ
## Prompt Format
I recommend using ChatML format, as this was used during fine-tune. \
Here's the prompt format you should use. You can set a different system message; the model seems to respect it fine, so it wasn't overfitted to one.
```
<|im_start|>system
A chat with uncensored assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Intended uses & limitations
Use is limited by apache-2.0 license.
## Known Issues
I recommend setting repetition penalty to around 1.05 to avoid repetition. So far I have had good experience running this model with temperature 1.2.
Stories have ChatGPT-like paragraph spacing; I may work on this in the future, but it's not a high priority.
The unrestricted-ness of the v2 isn't quite something that I am happy with yet, especially when using the prompt "A chat.". \
With a slightly modified prompt it works much better; I recommend using "A chat with uncensored assistant." - it's stupid, but it helps a lot. \
The base Yi-34B model is contaminated with refusals, and this contaminates all models trained on Yi-34B. \
My next project is to attempt to de-contaminate base Yi-34B 4K and Yi-34B 200K using DPO, with preferred data coming from uncontaminated raw models. I plan to release that dataset openly.
I was made aware of the frequent occurrence of the phrase "sending shivers down a spine" in the generations during RP with v1, so I fixed those samples - it should be better now. \
I can hold up to 24000 ctx with the 4.65bpw exl2 version and 8-bit cache - long context should work as well as other models trained on the 200K version of Yi-34B. \
There is also some issue with handling long system messages for RP; I was planning to investigate it for v2 but I didn't.
## Axolotl training parameters
- bnb_4bit_use_double_quant: true
- is_llama_derived_model: true
- load_in_4bit: true
- adapter: qlora
- sequence_len: 1400
- sample_packing: true
- lora_r: 16
- lora_alpha: 32
- lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj
- lora_target_linear: true
- pad_to_sequence_len: false
- micro_batch_size: 1
- gradient_accumulation_steps: 1
- num_epochs: 2.4
- optimizer: adamw_bnb_8bit
- lr_scheduler: constant
- learning_rate: 0.00005
- train_on_inputs: false
- group_by_length: false
- bf16: true
- bfloat16: true
- flash_optimum: false
- gradient_checkpointing: true
- flash_attention: true
- seed: 42
## Upcoming
I will probably be working on de-contaminating the base Yi-34B model now. \
My second run of the AEZAKMI v2 fine-tune was just 0.15 epochs, and I really like how natural that model is and how rich its vocabulary is. I will try training less to hit the sweet spot. \
I will be uploading the LoRA adapter for that second run that was just 0.15 epochs. \
I believe I might have gotten what I wanted if I had stopped training sooner. I don't have checkpoints older than 1500 steps back, so I would need to re-run training to get it back.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_adamo1139__Yi-34B-200K-AEZAKMI-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.00|
|AI2 Reasoning Challenge (25-Shot)|67.92|
|HellaSwag (10-Shot) |85.61|
|MMLU (5-Shot) |75.22|
|TruthfulQA (0-shot) |56.74|
|Winogrande (5-shot) |81.61|
|GSM8k (5-shot) |58.91|
|
nlpguy/ColorShadow-7B-v2 | nlpguy | 2024-03-04T13:48:00Z | 1,382 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-30T15:39:17Z | ---
license: apache-2.0
tags:
- merge
model-index:
- name: ColorShadow-7B-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.69
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.93
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/ColorShadow-7B-v2
name: Open LLM Leaderboard
---
# ColorShadow-7B-v2
This is a Gradient-SLERP merge between diffnamehard/Mistral-CatMacaroni-slerp-7B and cookinai/Valkyrie-V1 performed using mergekit.
Here is the config file used:
```
slices:
- sources:
- model: diffnamehard/Mistral-CatMacaroni-slerp-7B
layer_range: [0, 32]
- model: cookinai/Valkyrie-V1
layer_range: [0, 32]
merge_method: slerp
base_model: diffnamehard/Mistral-CatMacaroni-slerp-7B
parameters:
t:
- filter: self_attn
value: [1, 0.5, 0.7, 0.3, 0]
- filter: mlp
value: [0, 0.5, 0.3, 0.7, 1]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
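Since the merge produces a standard Mistral-architecture checkpoint, it can be loaded like any other causal LM. A minimal, untested sketch (the prompt and generation settings are illustrative assumptions, not taken from this card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nlpguy/ColorShadow-7B-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Explain what a SLERP merge of two language models does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```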
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |66.88|
|AI2 Reasoning Challenge (25-Shot)|67.15|
|HellaSwag (10-Shot) |84.69|
|MMLU (5-Shot) |60.34|
|TruthfulQA (0-shot) |62.93|
|Winogrande (5-shot) |78.85|
|GSM8k (5-shot) |47.31|
|
bn22/tinyllama_frankenmerge | bn22 | 2024-01-08T16:44:11Z | 1,382 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-02T07:34:58Z | ---
license: apache-2.0
tags:
- merge
- mergekit
---
# tinyllama_frankenmerge
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T)
* [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
## ๐งฉ Configuration
```yml
slices:
- sources:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
layer_range: [0, 16]
- sources:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
layer_range: [6, 22]
merge_method: passthrough
dtype: float16
``` |
Azure99/blossom-v4-yi-34b | Azure99 | 2024-02-20T02:42:29Z | 1,382 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"dataset:Azure99/blossom-chat-v2",
"dataset:Azure99/blossom-math-v3",
"dataset:Azure99/blossom-wizard-v2",
"dataset:Azure99/blossom-orca-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-02T12:07:53Z | ---
license: apache-2.0
datasets:
- Azure99/blossom-chat-v2
- Azure99/blossom-math-v3
- Azure99/blossom-wizard-v2
- Azure99/blossom-orca-v2
language:
- zh
- en
---
# **BLOSSOM-v4-yi-34b**
[Github](https://github.com/Azure99/BlossomLM) • [Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Yi-34B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.
Training was conducted in two stages. The first stage used 100K Wizard, 100K Orca, 20K Math single-turn instruction datasets, training for 1 epoch; the second stage used 50K Blossom chat multi-turn dialogue dataset, and 2% randomly sampled data from the first stage, training for 3 epochs.
### Inference
Inference is performed in the form of dialogue continuation.
Single-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|:
```
Multi-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: Hello! How can I assist you today?<|endoftext|>
|Human|: Generate a random number using python
|Bot|:
```
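A minimal, untested sketch of this dialogue-continuation style with `transformers`; the loading options and generation settings are assumptions, not taken from this card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azure99/blossom-v4-yi-34b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

SYSTEM = ("A chat between a human and an artificial intelligence bot. "
          "The bot gives helpful, detailed, and polite answers to the human's questions.")

def build_prompt(history, user_message):
    # history is a list of (human, bot) turns; each bot turn in the history ends with <|endoftext|>
    prompt = SYSTEM + "\n"
    for human, bot in history:
        prompt += f"|Human|: {human}\n|Bot|: {bot}<|endoftext|>\n"
    prompt += f"|Human|: {user_message}\n|Bot|: "
    return prompt

prompt = build_prompt([("hello", "Hello! How can I assist you today?")],
                      "Generate a random number using python")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```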
Note: At the end of the Bot's output in the historical conversation, append a `<|endoftext|>`. |
mrm8488/camembert2camembert_shared-finetuned-french-summarization | mrm8488 | 2023-04-28T16:08:03Z | 1,381 | 12 | transformers | [
"transformers",
"pytorch",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"news",
"fr",
"dataset:mlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2022-03-02T23:29:05Z | ---
tags:
- summarization
- news
language: fr
datasets:
- mlsum
widget:
- text: "Un nuage de fumรฉe juste aprรจs lโexplosion, le 1er juin 2019. Une dรฉflagration dans une importante usine dโexplosifs du centre de la Russie a fait au moins 79 blessรฉs samedi 1er juin. Lโexplosion a eu lieu dans lโusine Kristall ร Dzerzhinsk, une ville situรฉe ร environ 400 kilomรจtres ร lโest de Moscou, dans la rรฉgion de Nijni-Novgorod. ยซ Il y a eu une explosion technique dans lโun des ateliers, suivie dโun incendie qui sโest propagรฉ sur une centaine de mรจtres carrรฉs ยป, a expliquรฉ un porte-parole des services dโurgence. Des images circulant sur les rรฉseaux sociaux montraient un รฉnorme nuage de fumรฉe aprรจs lโexplosion. Cinq bรขtiments de lโusine et prรจs de 180 bรขtiments rรฉsidentiels ont รฉtรฉ endommagรฉs par lโexplosion, selon les autoritรฉs municipales. Une enquรชte pour de potentielles violations des normes de sรฉcuritรฉ a รฉtรฉ ouverte. Fragments de shrapnel Les blessรฉs ont รฉtรฉ soignรฉs aprรจs avoir รฉtรฉ atteints par des fragments issus de lโexplosion, a prรฉcisรฉ une porte-parole des autoritรฉs sanitaires citรฉe par Interfax. ยซ Nous parlons de blessures par shrapnel dโune gravitรฉ moyenne et modรฉrรฉe ยป, a-t-elle prรฉcisรฉ. Selon des reprรฉsentants de Kristall, cinq personnes travaillaient dans la zone oรน sโest produite lโexplosion. Elles ont pu รชtre รฉvacuรฉes en sรฉcuritรฉ. Les pompiers locaux ont rapportรฉ nโavoir aucune information sur des personnes qui se trouveraient encore dans lโusine."
---
# French RoBERTa2RoBERTa (shared) fine-tuned on MLSUM FR for summarization
## Model
[camembert-base](https://huggingface.co/camembert-base) (RoBERTa Checkpoint)
## Dataset
**MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, **French**, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset.
[MLSUM fr](https://huggingface.co/datasets/viewer/?dataset=mlsum)
## Results
| Set | Metric | Score |
|------|--------------------------|-----------|
| Test | ROUGE-2 (mid) precision | **14.47** |
| Test | ROUGE-2 (mid) recall | **12.90** |
| Test | ROUGE-2 (mid) F-measure | **13.30** |
## Usage
```python
import torch
from transformers import RobertaTokenizerFast, EncoderDecoderModel
device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = 'mrm8488/camembert2camembert_shared-finetuned-french-summarization'
tokenizer = RobertaTokenizerFast.from_pretrained(ckpt)
model = EncoderDecoderModel.from_pretrained(ckpt).to(device)
def generate_summary(text):
inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
input_ids = inputs.input_ids.to(device)
attention_mask = inputs.attention_mask.to(device)
output = model.generate(input_ids, attention_mask=attention_mask)
return tokenizer.decode(output[0], skip_special_tokens=True)
text = "Un nuage de fumรฉe juste aprรจs lโexplosion, le 1er juin 2019. Une dรฉflagration dans une importante usine dโexplosifs du centre de la Russie a fait au moins 79 blessรฉs samedi 1er juin. Lโexplosion a eu lieu dans lโusine Kristall ร Dzerzhinsk, une ville situรฉe ร environ 400 kilomรจtres ร lโest de Moscou, dans la rรฉgion de Nijni-Novgorod. ยซ Il y a eu une explosion technique dans lโun des ateliers, suivie dโun incendie qui sโest propagรฉ sur une centaine de mรจtres carrรฉs ยป, a expliquรฉ un porte-parole des services dโurgence. Des images circulant sur les rรฉseaux sociaux montraient un รฉnorme nuage de fumรฉe aprรจs lโexplosion. Cinq bรขtiments de lโusine et prรจs de 180 bรขtiments rรฉsidentiels ont รฉtรฉ endommagรฉs par lโexplosion, selon les autoritรฉs municipales. Une enquรชte pour de potentielles violations des normes de sรฉcuritรฉ a รฉtรฉ ouverte. Fragments de shrapnel Les blessรฉs ont รฉtรฉ soignรฉs aprรจs avoir รฉtรฉ atteints par des fragments issus de lโexplosion, a prรฉcisรฉ une porte-parole des autoritรฉs sanitaires citรฉe par Interfax. ยซ Nous parlons de blessures par shrapnel dโune gravitรฉ moyenne et modรฉrรฉe ยป, a-t-elle prรฉcisรฉ. Selon des reprรฉsentants de Kristall, cinq personnes travaillaient dans la zone oรน sโest produite lโexplosion. Elles ont pu รชtre รฉvacuรฉes en sรฉcuritรฉ. Les pompiers locaux ont rapportรฉ nโavoir aucune information sur des personnes qui se trouveraient encore dans lโusine."
generate_summary(text)
# Output: Lโexplosion a eu lieu dans lโusine Kristall ร Dzerzhinsk, une ville situรฉe ร environ 400 kilomรจtres ร lโest de Moscou.
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain |
heegyu/polyglot-ko-5.8b-chat | heegyu | 2023-08-18T05:08:42Z | 1,381 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:dbdu/ShareGPT-74k-ko",
"dataset:heegyu/korquad-chat-v1",
"dataset:HAERAE-HUB/KoInstruct-QA",
"dataset:changpt/ko-lima-vicuna",
"dataset:nlpai-lab/kullm-v2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-18T00:24:41Z | ---
datasets:
- beomi/KoAlpaca-v1.1a
- dbdu/ShareGPT-74k-ko
- heegyu/korquad-chat-v1
- HAERAE-HUB/KoInstruct-QA
- changpt/ko-lima-vicuna
- nlpai-lab/kullm-v2
language:
- ko
---
# heegyu/polyglot-ko-5.8b-chat
- A model trained from [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) on several Korean instruction datasets.
## Datasets used
| Dataset | # instances | Type |
| --- | --- | --- |
| [KoAlpaca v1.1](https://raw.githubusercontent.com/Beomi/KoAlpaca/main/KoAlpaca_v1.1.jsonl) | 50K | single-turn |
| [dbdu/ShareGPT-74k-ko, part2_ko_uncleaned](https://huggingface.co/datasets/dbdu/ShareGPT-74k-ko/resolve/main/part2_ko_uncleaned.json) | 36K | multi-turn |
| [heegyu/korquad-chat-v1](https://huggingface.co/datasets/heegyu/korquad-chat-v1) | 9.6K | multi-turn, knowledge-grounded |
| [lcw99/evolve-instruct](https://github.com/lcw99/evolve-instruct/) | 37K | single-turn |
| [HAERAE-HUB/KoInstruct-QA](https://huggingface.co/datasets/HAERAE-HUB/KoInstruct-QA) | 50.3k | single-turn |
| [changpt/ko-lima-vicuna](https://huggingface.co/datasets/changpt/ko-lima-vicuna) | 1K | single-turn, multi-turn (a very small portion) |
| [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) | 15K | single-turn |
- From the KULLM v2 dataset, only the GPT4ALL and Dolly data were extracted and used.
- For the various training datasets, see the [HeegyuKim/open-korean-instructions](https://github.com/HeegyuKim/open-korean-instructions) GitHub repository.
## Generation Prompt
- Generate until the EOS token (<|endoftext|>) appears (a minimal Python sketch follows the example below).
- Keeping the system prompt at the top seems to yield good answers more often.
```
당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. 답변은 길고 상세하며 친절한 설명을 덧붙여서 작성하세요.
### 사용자:
서울 관광지 추천해줘요
### 챗봇:
```
Generated answer:
```
์์ธ ๊ด๊ด์ง๋ฅผ ์ถ์ฒํ๊ฒ ์ต๋๋ค.
๊ฒฝ๋ณต๊ถ (๊ฒฝ๋ณต๊ถ ๋๋ ๊ฒฝ๋ณต๊ถ)
๊ฒฝ๋ณต๊ถ์ ํ๊ตญ ์ ํต ๊ถ๊ถ๋ก์ ์กฐ์ ์๋์ ์ญ์ฌ์ ๊ฑด์ถ ์ฒ ํ์ ์ฒดํํ ์ ์๋ ๊ณต๊ฐ์
๋๋ค. ๊ถ์ ๋ฐฉ๋ฌธํ๋ฉด ์กฐ์ ์๋ ์ํ๊ณผ ํต์น๋ฐฉ์์ ๋ํด ๋ฐฐ์ธ ์ ์์ผ๋ฉฐ, ๋์์ ์ ํต์ ์ธ ๊ถ๊ถ ๋ฌธํ๋ฅผ ์ฒดํํ ์ ์์ต๋๋ค.
์ฐฝ๋๊ถ (์ฐฝ๋๊ถ)
์ฐฝ๋๊ถ์ ์กฐ์ ์๋์ ์์ค์ ์ ์์ผ๋ก ์ ๋ช
ํ ๊ถ์
๋๋ค. ํ๊ตญ์ ๋ฅด๋ค์์ค ์๋๋ฅผ ๋ณด์ฌ์ฃผ๋ ์๋ฆ๋ค์์ ๊ฒฝํํ ์ ์์ผ๋ฉฐ, ์ ํต์ ์ธ ๊ฑด์ถ์์์ ๊ฒฝํํ ์ ์๋ ๊ธฐํ๊ฐ ์ฃผ์ด์ง๋๋ค.
๋์๊ถ (๋์๊ถ)
๋์๊ถ์ 1896๋
์ ์ค๋ฆฝ๋ ์์ด ์ฌ๋ ๊ถ๊ถ์
๋๋ค. ๋น์์ ์๋ฆ๋ค์๊ณผ ๊ฑด์ถ ์ฒ ํ์ ๋์์ ๋๋ ์ ์๋ ๊ถ๊ถ์
๋๋ค. ํ๊ตญ์ ๊ถ๊ถ ๋ฌธํ, ์ญ์ฌ ๋ฐ ์์ฐ ํ๊ฒฝ์ ๋ํ ํ์ต์ ์ํ๋ ์ฌ๋์๊ฒ ์ถ์ฒ๋ฉ๋๋ค.
์ข
๋ฌ (์ข
๋ฌ)
์ข
๋ฌ๋ ์กฐ์ ์๋์ ์์ค ์ ์์ผ๋ก์ ๊ถ๊ถ๊ณผ ํจ๊ป ์์ ๊ฑฐ์ฒ ๋ฐ ์ ์ฌ๋ฅผ ์ง๋ด๋ ๊ณณ์
๋๋ค. ์ ํต์ ์ธ ์๋ก, ์ญ์ฌ ๋ฐ ๊ฑด์ถ ์ฒ ํ์ ๋๋ผ๊ณ ์ฒดํํ ์ ์๋ ๊ณณ์
๋๋ค.
ํ์ฅ๋ง์ (ํ์ฅ๋ง์)
ํ์ฅ๋ง์์ ์์ธ ์ค์ฌ๋ถ์ ์์นํ ์ ๋ช
ํ ๊ด๊ด์ง๋ก, ํ๊ตญ ์ ํต ๋ฌธํ ์ฒดํ์ ์ํ๋ ์ฌ๋๊ณผ ์ธ๊ตญ์ธ์๊ฒ๋ ์ธ๊ธฐ๊ฐ ๋ง์ต๋๋ค. ํ์ฅ ๊ฑด๋ฌผ๊ณผ ์ ํต ์ ์์ ๊ฒฝํํ ์ ์์ผ๋ฉฐ, ๋์ ์์ ์ ํต ๋ฌธํ๋ฅผ ๊ฒฝํํ ์ ์๋ ๊ธฐํ๋ฅผ ์ ๊ณตํฉ๋๋ค.
์ธ์ฌ๋ (์ธ์ฌ๋)
์ธ์ฌ๋์ ํ๊ตญ์ ์ ํต์ ์ธ ๋ฌธํ์ ์์ , ์์ ๋ฐ ์ผํ์ ์ฒดํํ ์ ์๋ ๊ณณ์
๋๋ค. ํ๊ตญ์ ์ ํต๋ฌธํ๋ฅผ ์ฒดํํ๊ณ , ์ ํต ๋ฌธํ์ ์๋ฆ๋ค์์ ๋๋ ์ ์๋ ์ฅ์์
๋๋ค.
์์ ๊ด๊ด์ง๋ฅผ ๋ฐฉ๋ฌธํจ์ผ๋ก์จ, ์ฐ๋ฆฌ๋ ํ๊ตญ์ ์ ํต์ ์ธ ๋ฌธํ์ ์ญ์ฌ๋ฅผ ๊ฒฝํํ๊ณ ๋ฐฐ์ธ ์ ์์ต๋๋ค. ๋ํ, ์๋ฆ๋ค์๊ณผ ๊ฑด์ถ ์ฒ ํ์ ๋ํ ์ดํด๋๋ฅผ ๋์ผ ์ ์์ผ๋ฉฐ, ์ฐ๋ฆฌ ์ง์ญ์ ๋ฌธํ, ์ ํต, ์ญ์ฌ๋ฅผ ๋์ฑ ์ดํดํ๊ณ ์ฌ๋ํ๊ฒ ๋ ์ ์์ต๋๋ค.
```
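A minimal, untested Python sketch of driving the prompt format above with `transformers`; the generation settings are illustrative assumptions, not taken from this card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/polyglot-ko-5.8b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Prompt text taken from the template shown above
prompt = (
    "당신은 AI 챗봇입니다. 사용자에게 도움이 되고 유익한 내용을 제공해야합니다. "
    "답변은 길고 상세하며 친절한 설명을 덧붙여서 작성하세요.\n"
    "### 사용자:\n서울 관광지 추천해줘요\n### 챗봇:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    eos_token_id=tokenizer.eos_token_id,  # stop at <|endoftext|>
)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```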
Multi-turn dialogue example:
```
๋น์ ์ AI ์ฑ๋ด์
๋๋ค. ์ฌ์ฉ์์๊ฒ ๋์์ด ๋๊ณ ์ ์ตํ ๋ด์ฉ์ ์ ๊ณตํด์ผํฉ๋๋ค. ๋ต๋ณ์ ๊ธธ๊ณ ์์ธํ๋ฉฐ ์น์ ํ ์ค๋ช
์ ๋ง๋ถ์ฌ์ ์์ฑํ์ธ์.
### ์ฌ์ฉ์:
์ ๋
๋ฉ๋ด ์ถ์ฒํด์ฃผ์ธ์
### ์ฑ๋ด:
ํ์์ ์ ํธํ์ธ์?
### ์ฌ์ฉ์:
๋ค๋ค
### ์ฑ๋ด:
ํ์ ์ค์์๋ ์ฐ๊ฒ๋ฅ๋ฅผ ์ถ์ฒ๋๋ฆฌ๊ฒ ์ด์. ๋์ฅ์ฐ๊ฐ, ๊น์น์ฐ๊ฐ, ๋ถ๋์ฐ๊ฐ์ ๊ฐ์ ์ ํต์ ์ธ ์ฐ๊ฒ ๋ฉ๋ด๋ฅผ ์ ํํด๋ณด์ธ์. ๋ ์ข์ ์ ํ์ ์ํด ๋ค์ํ ์ข
๋ฅ์ ์ฐ๊ฒ๋ฅผ ์๋ํด๋ณด์ธ์. ๋ง์ฝ์ ๋ค๋ฅธ ์์์ ๋จน๊ณ ์ถ๋ค๋ฉด ๊ทธ๊ฒ๋ ์ข์์!
``` |
malhajar/Platypus2-70B-instruct-4bit-gptq | malhajar | 2024-01-29T08:31:14Z | 1,381 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:yahma/alpaca-cleaned",
"arxiv:2307.09288",
"arxiv:2210.17323",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-23T11:33:53Z | ---
datasets:
- yahma/alpaca-cleaned
---
# Platypus2-70B-instruct-4bit-gptq
Platypus2-70B-instruct-4bit-gptq is a quantized version of [`garage-bAInd/Platypus2-70B-instruct`](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct) produced with GPTQ quantization.
This model is only 35 GB in size, compared with the original garage-bAInd/Platypus2-70B-instruct at 127 GB, and can run on a single A6000 GPU.
### Model Details
* **Quantized by**: [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
* **Model type:** quantized version of Platypus2-70B-instruct using 4-bit quantization
* **Language(s)**: English
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
### Training Dataset
`Platypus2-70B-instruct-4bit-gptq` was quantized using GPTQ on the Alpaca dataset [`yahma/alpaca-cleaned`](https://huggingface.co/datasets/yahma/alpaca-cleaned).
### Training Procedure
`garage-bAInd/Platypus2-70B-instruct` was quantized using GPTQ on 2 x L40 48GB GPUs.
## How to Get Started with the Model
First install auto_gptq with
```shell
pip install auto_gptq
```
Then use the code sample below to interact with the model.
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM
model_id = "malhajar/Platypus2-70B-instruct-4bit-gptq"
model = AutoGPTQForCausalLM.from_quantized(model_id,inject_fused_attention=False,
use_safetensors=True,
trust_remote_code=False,
use_triton=False,
quantize_config=None)
tokenizer = AutoTokenizer.from_pretrained(model_id)
question = "Who was the first person to walk on the moon?"
# For generating a response
prompt = f'''
### Instruction:
{question}
### Response:'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids)
response = tokenizer.decode(output[0])
print(response)
```
### Citations
```bibtex
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
```
```bibtex
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
}
```
```bibtex
@misc{frantar2023gptq,
title={GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers},
author={Elias Frantar and Saleh Ashkboos and Torsten Hoefler and Dan Alistarh},
year={2023},
eprint={2210.17323},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CHIH-HUNG/llama-2-13b-FINETUNE2_3w | CHIH-HUNG | 2023-09-13T17:45:52Z | 1,381 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE2",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-31T21:34:40Z | ---
license: llama2
datasets:
- huangyt/FINETUNE2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained from llama-2-13b on the huangyt/FINETUNE2 dataset, with roughly 30k training samples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE2 (approx. 30k training samples)
- **peft_type:** LoRA (an illustrative configuration sketch follows this list)
- **lora_rank:** 8
- **lora_target:** q_proj, v_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
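As an illustration only, the hyperparameters above could map onto a PEFT/LoRA setup roughly like the sketch below; this is an assumption about how such a run might be configured, not the author's actual training script (LoRA alpha and dropout are not stated in the card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-13b-hf"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA settings mirroring the list above: rank 8, applied to q_proj and v_proj
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,          # assumption; not stated in the card
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,      # assumption; not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```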
# Fine-Tuning Detail
- **train_loss:** 0.67
- **train_runtime:** 3:27:00 (use deepspeed)
# Evaluation
- Evaluation results come from **HuggingFaceH4/open_llm_leaderboard**
- Compared against Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU | TruthfulQA |
|-----------------------------------------------------|-------|-------|---------|-------|------------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj | 58.21 | 58.53 | 82.47 | 53.9 | 37.92 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w | 58.34 | 58.62 | 82.32 | 54.25 | 38.17 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-gate_up_down_proj | 58.65 | 57.42 | 82.42 | 55.57 | 39.19 |
# How to convert dataset to json
- Pass the dataset name to **load_dataset**, and use **take** to specify how many samples to fetch
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Read the dataset; take(n) can fetch the first n samples of the dataset
dataset = load_dataset("huangyt/FINETUNE2", split="train", streaming=True)

# Extract the desired fields and build a new list of dictionaries
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Specify the JSON file name
json_filename = "huangyt_FINETUNE2.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved as {json_filename}")
``` |
Norquinal/Mistral-7B-claude-instruct | Norquinal | 2023-11-17T21:09:39Z | 1,381 | 3 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"dataset:Norquinal/claude_multi_instruct_1k",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-28T23:39:50Z | ---
datasets:
- Norquinal/claude_multi_instruct_1k
license: cc-by-nc-4.0
---
This is the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned using QLoRA (4-bit precision) on my [claude_multi_instruct_1k](https://huggingface.co/datasets/Norquinal/claude_multi_instruct_1k) dataset.
## Prompt Format
The model was finetuned with the following prompt format:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:
```
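A minimal, untested sketch of driving this prompt format with `transformers`; the example instruction and generation settings are assumptions, not taken from this card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Norquinal/Mistral-7B-claude-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

instruction = "Summarize the main benefits of unit testing in two sentences."
# Prompt built to match the format shown above
prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```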
## Use in Text Generation Web UI
Install the bleeding-edge version of `transformers` from source:
```
pip install git+https://github.com/huggingface/transformers
```
Or, alternatively, change `model_type` in `config.json` from `mistral` to `llama`.
## Example Usage
>\### Instruction:
>Deconstruct five major technological innovations in 20th and 21st centuriesโInternet, personal computing, mobile phones, artificial intelligence, and electric vehicles. For each innovation, provide information on their early origins, key inventors or pioneers, important milestones and breakthroughs, adoption and proliferation into mainstream use, economic effects, societal impact, ethical considerations, and potential future developments. Discuss how these innovations reshaped the way people work, communicate, and live, and explore the synergies or connections among these technologies and the potential for future transformative advancements.
>\### Response:
>Internet - The Internet was invented as a network of networks connecting computers at research institutions across the United States beginning in the late 1960s and early 1970s. Key individuals involved include Vint Cerf and Bob Kahn who developed TCP/IP protocols in the 1970s allowing different types of computers to connect over the Internet. In the 1980s, Tim Berners-Lee invented HTTP and HTML which laid the foundation for the World Wide Web. As the Internet grew, it revolutionized communication and commerce, enabling the rise of email, social media, ecommerce, and cloud computing. Today nearly half of the world's population has access to the Internet. While the Internet has brought immense benefits, there are also concerns about online privacy, misinformation, cybersecurity threats, and the impact on human interaction and attention spans. As AI becomes more capable, there will likely be new opportunities for intelligent systems to interact with the Internet and enhance its capabilities.
>
>Personal Computing - Personal computers were first envisioned by engineers like Alan Kay and Douglas Engelbart in the 1950s and 1960s. Steve Jobs and Steve Wozniak founded Apple Computer in 1976 with the release of the Apple I and II, while Bill Gates launched Microsoft in 1975. The IBM PC introduced the concept of compatible software and hardware platforms in 1981. Over time, personal computing became widespread through products from companies like Compaq, Dell, HP, and Acer. Desktop computers enabled productivity tools, video games, digital artistry, and online learning. Laptops made computing portable and ubiquitous. However, the environmental impacts of electronic waste from discarded devices need to be addressed. Future trends may involve more flexible, wearable computer interfaces that integrate with augmented reality and neural implants.
>
>Mobile Phones - Martin Cooper of Motorola made the first mobile phone call in 1973. Nokia released the first commercially successful cellphone, the Mobira Senator, in 1982. By the mid-1990s, cellphones had become commonplace thanks to smaller handsets and affordable pricing plans. Smartphones combining voice calling with internet browsing and apps debuted in the early 2000s. Companies like Samsung, Apple, Google and Huawei have since transformed smartphones into indispensable devices for communication, entertainment, banking, navigation, and even healthcare. Mobile technology has helped bridge the digital divide and improve access to education and government services in developing countries. However, screen addiction, distraction and safety risks from texting and driving remain challenges. Innovations in foldable displays, flexible batteries, and wireless charging could lead to next generation designs.
>
>Artificial Intelligence - Modern AI emerged from research in the 1940s by mathematicians and scientists including Turing, Shannon, von Neumann, McCarthy, and others. Deep learning algorithms based on neural networks took off in the 2010s thanks to advances in computing power, big data, and new architectures. Companies like Google, Amazon, Facebook and Microsoft have invested heavily in AI applications ranging from search engines to autonomous cars. Ethical concerns around transparency, bias and control of powerful AI systems will need to be addressed. Applications in medicine, manufacturing, logistics and transportation promise huge efficiency gains but may displace workers. Future directions include greater autonomy for robots, smarter virtual assistants, natural language processing, and machine reasoning.
>
>Electric Vehicles - Electric cars date back to the 1800s but faced range limitations compared to gasoline vehicles until the development of lithium-ion batteries in the 1990s. Tesla popularized all-electric vehicles starting in 2008 with the Roadster and Model S. Other automakers followed with hybrids and plug-in EVs. China has become the largest market due to government incentives and air pollution concerns. With continued cost reductions and improved battery technology, EVs could reach price parity with internal combustion engine (ICE) vehicles within a decade. However, issues around battery recycling, grid capacity and infrastructure investments need to be resolved. Autonomous driving technologies and shared mobility models could further boost the appeal of EVs.
>
>In summary, these five technologies have deeply altered our lives and created new possibilities, yet they come with tradeoffs and ethical dilemmas. Continued innovation and collaboration between industry, academia and governments will be needed to ensure these technologies fulfill their promises while minimizing unforeseen consequences.
>
>I hope this provides some helpful insights! Let me know if you would like me to expand on any specific points.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Norquinal__Mistral-7B-claude-instruct)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.71 |
| ARC (25-shot) | 63.23 |
| HellaSwag (10-shot) | 84.99 |
| MMLU (5-shot) | 63.84 |
| TruthfulQA (0-shot) | 47.47 |
| Winogrande (5-shot) | 78.14 |
| GSM8K (5-shot) | 17.97 |
| DROP (3-shot) | 6.35 |
|
BlueNipples/TimeCrystal-l2-13B | BlueNipples | 2023-11-13T09:40:06Z | 1,381 | 15 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"roleplaying",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-11T11:09:16Z | ---
license: apache-2.0
tags:
- llama-2
- roleplaying
---
This 13B model, TimeCrystal-l2-13B, is built to maximize logic and instruction following while also favouring the vivid prose of Chronos-based models like MythoMax over more romantic prose, hopefully without losing the elegant narrative-structure touch of newer models like Synthia and Xwin. TL;DR: an attempt at more clever, better prose.
Tentative test results: I'm not certain whether logic/instruction following improved (I haven't tested much), but the prose infusion seems to have worked really well.
It is built as follows:
SLERPS:
Amethyst + Openchat Super = OpenStone
MythoMax + Chronos = ChronoMax
ChronoMax + Amethyst = TimeStone
Gradient Merge:
TimeStone + OpenStone (0.9,0,0) = TimeCrystal
Props to all the mergers, fine tuners!
All models in Merge: Many, lol. |
hedronstone/OpenHermes-7B-Symbolic | hedronstone | 2024-01-06T17:53:55Z | 1,381 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-10T21:40:09Z | ---
license: apache-2.0
---
**OpenHermes-7B-Symbolic**
<img src="https://i.ibb.co/zJJyywL/DALL-E-2023-12-10-22-19-40-A-digital-art-piece-that-creatively-represents-the-famous-Thinking-Man-st.png" width="45%" height="auto">
## Model description
OpenHermes-7B-Symbolic is an OpenHermes-2.5-Mistral-7B fine-tune trained on 93K comprehensive and meticulously curated samples.
Each sample was structured to facilitate the model's understanding and generation of the complex, hierarchical ICD medical coding system.
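Since the base model is OpenHermes-2.5-Mistral-7B, a ChatML-style prompt is a reasonable starting point. The snippet below is an untested sketch and assumes the fine-tune keeps that format; the system and user messages are illustrative only.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hedronstone/OpenHermes-7B-Symbolic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# ChatML-style prompt, assuming the OpenHermes-2.5 format carries over to this fine-tune
prompt = (
    "<|im_start|>system\nYou are a helpful medical coding assistant.<|im_end|>\n"
    "<|im_start|>user\nWhich ICD-10 chapter covers diseases of the circulatory system?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```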
| Benchmark | OpenHermes-7B-Symbolic | OpenHermes-2.5-Mistral-7B |
|-------------|------------------------|---------------------------|
| Average | 64.44 | 65.26 |
| ARC | 63.14 | 64.93 |
| HellaSwag | 82.73 | 84.18 |
| MMLU | 62.62 | 63.64 |
| TruthfulQA | 48.82 | 52.24 |
| Winogrande | 75.85 | 78.06 |
| GSM8K | 53.45 | 48.52 |
|
athirdpath/NSFW_DPO_Noromaid-7b | athirdpath | 2023-12-12T23:58:09Z | 1,381 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v2",
"dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-12T21:23:33Z | ---
license: cc-by-nc-4.0
datasets:
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v2
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
language:
- en
---
NeverSleep/Noromaid-7b-v0.1.1, trained until a crash on the DPO-v2 dataset (made private until I can figure it out); the same LoRA state was then trained from that point for 2 epochs on the NSFW_DPO-v1 dataset. |
viethq188/LeoScorpius-7B-Chat-DPO | viethq188 | 2023-12-14T04:33:29Z | 1,381 | 14 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-13T15:20:57Z | ---
license: apache-2.0
---
# Updated: 14th December 2023
We are ranked 3rd on the overall leaderboard and 1st among 7B models on the [Hugging Face Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)! 🔥🔥🔥
A DPO fine-tune of viethq188/LeoScorpius-7B on the [Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) dataset.


You can use the Alpaca template.
```
template_format = """{system}
### Instruction:
{prompt}
### Response:
"""
``` |
martyn/mistral-megamerge-dare-7b | martyn | 2023-12-19T17:16:42Z | 1,381 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"dare",
"super mario merge",
"merge",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-14T05:01:57Z | ---
license: mit
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- dare
- super mario merge
- pytorch
- mistral
- merge
---
# mistral 7b mega merge
This model was merged using [https://github.com/martyn/safetensors-merge-supermario](https://github.com/martyn/safetensors-merge-supermario) with hyperparams `p=0.12` and `lambda=2.1`.
The first entry is the base model:
```
mistralai/Mistral-7B-Instruct-v0.2
uukuguy/speechless-code-mistral-7b-v1.0
AIDC-ai-business/Marcoroni-7B-v3
Weyaxi/Seraph-7B
rwitz/dec10
Intel/neural-chat-7b-v3-3
rwitz/go-bruins-v2
```
To merge your own model:
```
python hf_merge.py to_merge_7b.txt mistral_7b_0.2_merge -p 0.12 -lambda 2.1
``` |
Mihaiii/Pallas-0.5-LASER-0.3 | Mihaiii | 2024-01-02T17:20:35Z | 1,381 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:Mihaiii/Pallas-0.5-LASER-0.2",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-01T22:03:49Z | ---
base_model: Mihaiii/Pallas-0.5-LASER-0.2
inference: false
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
metrics:
- accuracy
---
This model has a [LASER](https://pratyushasharma.github.io/laser/) intervention on [Mihaiii/Pallas-0.5-LASER-0.2](https://huggingface.co/Mihaiii/Pallas-0.5-LASER-0.2).
Configs used:
- lnum: 56
- lnames: mlp (meaning: ["mlp.gate_proj.weight", "mlp.up_proj.weight", "mlp.down_proj.weight"])
- rate: 9.5
- dataset: bigbench (subset: causal_judgement)
- intervention type: rank-reduction
|Name|Validation acc (higher is better)|Validation logloss (lower is better)|Test acc (higher is better)|Test logloss (lower is better)|
|---|---|---|---|---|
|Pallas-0.5|55.263|1.650|60.526|1.463|
|Pallas-0.5-LASER-0.1|55.263|1.639|61.184|1.451|
|Pallas-0.5-LASER-0.2|55.263|1.646|61.184|1.458|
|Pallas-0.5-LASER-0.3|55.263|1.575|61.842|1.382|
|Pallas-0.5-LASER-0.4|55.263|1.525|61.842|1.326|
|Pallas-0.5-LASER-0.5|55.263|1.484|61.842|1.297|
|Pallas-0.5-LASER-0.6|55.263|1.455|61.184|1.283|
In order to replicate on a single A100, you can use [my branch](https://github.com/Mihaiii/laser/tree/allow-Yi-on-one-A100) (the original code will throw OOM for 34b models).
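As a rough illustration of what the rank-reduction intervention described above does, the sketch below applies a truncated-SVD approximation to a weight matrix. This is not the LASER project's actual code (see the linked repository for that), and the mapping from `rate` to the retained rank is an assumption.
```python
import torch

def rank_reduce(weight: torch.Tensor, rate: float) -> torch.Tensor:
    """Replace `weight` with a low-rank SVD approximation.

    `rate` loosely mirrors the card's `rate` parameter: a higher rate keeps
    fewer singular values. Illustrative sketch only, not the LASER implementation.
    """
    u, s, vh = torch.linalg.svd(weight.float(), full_matrices=False)
    k = max(1, int(s.numel() / rate))  # assumption about how rate maps to rank
    return (u[:, :k] * s[:k]) @ vh[:k, :]

# Small stand-in matrix so the example runs quickly; real mlp weights are much larger
w = torch.randn(256, 512)
w_reduced = rank_reduce(w, rate=9.5)
print(w.shape, w_reduced.shape)
```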
|
Azazelle/Maylin-7b | Azazelle | 2024-01-05T22:29:25Z | 1,381 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-04T18:05:19Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Maylin-7b
<!-- Provide a quick summary of what the model is/does. -->
DARE merge intended to help the Argetsu model be more coherent and less horny.
.yaml file for mergekit
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: SanjiWatsuki/Sonya-7B #200
parameters:
weight: 0.45
density: 0.75
- model: Azazelle/Argetsu #175
parameters:
weight: 0.39
density: 0.70
- model: Azazelle/Tippy-Toppy-7b #100
parameters:
weight: 0.22
density: 0.52
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
``` |
SakuraLLM/Sakura-1B8-Qwen2beta-v0.9.1-GGUF | SakuraLLM | 2024-03-05T12:41:19Z | 1,381 | 1 | null | [
"gguf",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-03-05T06:50:51Z | ---
license: cc-by-nc-sa-4.0
---
|
lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full | lightblue | 2024-05-30T09:58:05Z | 1,381 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2405.18952",
"base_model:lightblue/suzume-llama-3-8B-multilingual",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-25T06:43:34Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
base_model: lightblue/suzume-llama-3-8B-multilingual
model-index:
- name: workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_full_borda
results: []
---
# Suzume ORPO
<p align="center">
<img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png" alt="Suzume with Mitsu - a Japanese tree sparrow with honey on it"/>
</p>
[[Paper]](https://arxiv.org/abs/2405.18952) [[Dataset]](https://huggingface.co/datasets/lightblue/mitsu)
This is Suzume ORPO, an ORPO trained fine-tune of the [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) model using our [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset.
We have trained several versions of this model using ORPO and so recommend that you use the best performing model from our tests, [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half).
Note that this model has a non-commercial license, as we used the Command R and Command R+ models to generate our training data for this model ([lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu)).
We are currently working on developing a commercially usable model, so stay tuned for that!
# Model list
We have ORPO trained the following models using different proportions of the [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset:
* Trained on the top/bottom responses of all prompts in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full)
* Trained on the top/bottom responses of the prompts of the 75\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75)
* Trained on the top/bottom responses of the prompts of the 50\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half)
* Trained on the top/bottom responses of the prompts of the 25\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25)
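A minimal, untested sketch of loading one of the checkpoints listed above and chatting via the tokenizer's built-in chat template; the example message and generation settings are illustrative assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half"  # the recommended checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "フランスの首都はどこですか?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```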
# Model results
We compare the MT-Bench scores across 6 languages for our 4 ORPO trained models, as well as some baselines:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - The foundation model that our models are ultimately built upon
* [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) - The highest performing open model on the Chatbot arena that is of a similar size to ours
* gpt-3.5-turbo - A fairly high quality (although not state-of-the-art) proprietary LLM
* [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) - The base model which we train our ORPO finetunes from
| **MT-Bench language** | **meta-llama/Meta-Llama-3-8B-Instruct** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | **lightblue/suzume-llama-3-8B-multilingual** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25** |
|-----------------------|-----------------------------------------|-----------------------------------|-------------------|----------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|
| **Chinese 🇨🇳** | NaN | 6.97 | 7.55 | 7.11 | 7.65 | **7.77** | 7.74 | 7.44 |
| **English 🇺🇸** | 7.98 | 7.92 | **8.26** | 7.73 | 7.98 | 7.94 | 7.98 | 8.22 |
| **French 🇫🇷** | NaN | 7.29 | 7.74 | 7.66 | **7.84** | 7.46 | 7.78 | 7.81 |
| **German 🇩🇪** | NaN | 6.99 | 7.68 | 7.26 | 7.28 | 7.64 | 7.7 | **7.71** |
| **Japanese 🇯🇵** | NaN | 6.22 | **7.84** | 6.56 | 7.2 | 7.12 | 7.34 | 7.04 |
| **Russian 🇷🇺** | NaN | 8.28 | 7.94 | 8.19 | 8.3 | 8.74 | **8.94** | 8.81 |
We can see a noticeable improvement in most languages compared to the base model. We also find that our ORPO models achieve the highest score of all the models we evaluated for a number of languages.
# Training data
We trained this model using the [lightblue/mitsu_full_borda](https://huggingface.co/datasets/lightblue/mitsu_full_borda) dataset.
# Training configuration
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: lightblue/suzume-llama-3-8B-multilingual
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast
load_in_8bit: false
load_in_4bit: false
strict: false
rl: orpo
orpo_alpha: 0.1
remove_unused_columns: false
chat_template: chatml
datasets:
- path: lightblue/mitsu_full_borda
type: orpo.chat_template
conversation: llama-3
dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual-orpo/prepared_mitsu_full_borda
val_set_size: 0.02
output_dir: /workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_full_borda
sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true
use_wandb: true
wandb_project: axolotl
wandb_entity: peterd
wandb_name: mitsu_full_borda
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 8e-6
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 20
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_full_borda
This model is a fine-tuned version of [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) on the [lightblue/mitsu_full_borda](https://huggingface.co/datasets/lightblue/mitsu_full_borda) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.6836 | 0.01 | 1 | 7.6268 |
| 3.9429 | 0.05 | 4 | 2.3993 |
| 0.1193 | 0.1 | 8 | 0.1331 |
| 0.1039 | 0.15 | 12 | 0.1209 |
| 0.1082 | 0.2 | 16 | 0.1189 |
| 0.1209 | 0.25 | 20 | 0.1180 |
| 0.1106 | 0.3 | 24 | 0.1157 |
| 0.103 | 0.35 | 28 | 0.1156 |
| 0.1141 | 0.41 | 32 | 0.1123 |
| 0.1156 | 0.46 | 36 | 0.1104 |
| 0.0925 | 0.51 | 40 | 0.1089 |
| 0.1113 | 0.56 | 44 | 0.1052 |
| 0.1146 | 0.61 | 48 | 0.1073 |
| 0.1029 | 0.66 | 52 | 0.1086 |
| 0.1198 | 0.71 | 56 | 0.1072 |
| 0.1205 | 0.76 | 60 | 0.1062 |
| 0.1209 | 0.81 | 64 | 0.1041 |
| 0.1047 | 0.86 | 68 | 0.1020 |
| 0.0798 | 0.91 | 72 | 0.1008 |
| 0.1007 | 0.96 | 76 | 0.1005 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
# How to cite
```tex
@article{devine2024sure,
title={Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets},
author={Devine, Peter},
journal={arXiv preprint arXiv:2405.18952},
year={2024}
}
```
# Developer
Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn)) |
tner/roberta-large-wnut2017 | tner | 2022-09-26T14:16:19Z | 1,380 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:wnut2017",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-08-09T23:12:35Z | ---
datasets:
- wnut2017
metrics:
- f1
- precision
- recall
model-index:
- name: tner/roberta-large-wnut2017
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut2017
type: wnut2017
args: wnut2017
metrics:
- name: F1
type: f1
value: 0.5375139977603584
- name: Precision
type: precision
value: 0.6789250353606789
- name: Recall
type: recall
value: 0.4448563484708063
- name: F1 (macro)
type: f1_macro
value: 0.4734480458244917
- name: Precision (macro)
type: precision_macro
value: 0.59471614080646
- name: Recall (macro)
type: recall_macro
value: 0.4020936892146829
- name: F1 (entity span)
type: f1_entity_span
value: 0.6304591265397536
- name: Precision (entity span)
type: precision_entity_span
value: 0.7963224893917963
- name: Recall (entity span)
type: recall_entity_span
value: 0.5217794253938832
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/roberta-large-wnut2017
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
[tner/wnut2017](https://huggingface.co/datasets/tner/wnut2017) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.5375139977603584
- Precision (micro): 0.6789250353606789
- Recall (micro): 0.4448563484708063
- F1 (macro): 0.4734480458244917
- Precision (macro): 0.59471614080646
- Recall (macro): 0.4020936892146829
The per-entity breakdown of the F1 score on the test set are below:
- corporation: 0.4065040650406504
- group: 0.33913043478260874
- location: 0.6715867158671587
- person: 0.6657342657342658
- product: 0.27999999999999997
- work_of_art: 0.4777327935222672
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.5084441265818846, 0.5659035599952082]
- 95%: [0.5009032784561068, 0.5708361009044657]
- F1 (macro):
- 90%: [0.5084441265818846, 0.5659035599952082]
- 95%: [0.5009032784561068, 0.5708361009044657]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-wnut2017/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/roberta-large-wnut2017/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/roberta-large-wnut2017")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
It can also be used via the transformers library, but this is not recommended as the CRF layer is not supported at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/wnut2017']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: roberta-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 64
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-wnut2017/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
timdettmers/guanaco-65b-merged | timdettmers | 2023-05-22T17:18:10Z | 1,380 | 57 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-22T16:40:21Z | Entry not found |
CHIH-HUNG/llama-2-13b-FINETUNE2_3w-gate_up_down_proj | CHIH-HUNG | 2023-09-06T04:55:54Z | 1,380 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE2",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-01T20:45:20Z | ---
license: llama2
datasets:
- huangyt/FINETUNE2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained from llama-2-13b on the huangyt/FINETUNE2 dataset, with roughly 30k training samples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE2 (approx. 30k training samples)
- **peft_type:** LoRA
- **lora_rank:** 8
- **lora_target:** gate_proj, up_proj, down_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.614
- **train_runtime:** 3:42:14 (use deepspeed)
# Evaluation
- Evaluation results come from **HuggingFaceH4/open_llm_leaderboard**
- Compared against Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU | TruthfulQA |
|-----------------------------------------------------|-------|-------|---------|-------|------------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w | 58.34 | 58.62 | 82.32 | 54.25 | 38.17 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj | 58.21 | 58.53 | 82.47 | 53.9 | 37.92 |
|CHIH-HUNG/llama-2-13b-FINETUNE2_3w-gate_up_down_proj | 58.65 | 57.42 | 82.42 | 55.57 | 39.19 |
# How to convert dataset to json
- Pass the dataset name to **load_dataset**, and use **take** to specify how many samples to fetch
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Read the dataset; take(n) can fetch the first n samples of the dataset
dataset = load_dataset("huangyt/FINETUNE2", split="train", streaming=True)

# Extract the desired fields and build a new list of dictionaries
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Specify the JSON file name
json_filename = "huangyt_FINETUNE2.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved as {json_filename}")
``` |
CHIH-HUNG/llama-2-13b-FINETUNE1_17w-q_k_v_o_proj | CHIH-HUNG | 2023-09-13T17:40:35Z | 1,380 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE1",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-03T21:40:13Z | ---
license: llama2
datasets:
- huangyt/FINETUNE1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained from llama-2-13b on the huangyt/FINETUNE1 dataset, with roughly 170k training samples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single core / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE1 (approx. 170k training samples)
- **peft_type:** LoRA
- **lora_rank:** 8
- **lora_target:** q_proj, k_proj, v_proj, o_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 5e-5
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.688
- **train_runtime:** 15:44:38 (use deepspeed)
# Evaluation
- Evaluation results come from **HuggingFaceH4/open_llm_leaderboard**
- Compared against Llama-2-13b on 4 benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
| Model |Average| ARC |HellaSwag| MMLU |TruthfulQA|
|--------------------------------------------------------|-------|-------|---------|-------|----------|
|meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
|meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w | 58.24 | 59.47 | 81 | 54.31 | 38.17 |
|CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj| 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
|CHIH-HUNG/llama-2-13b-Fintune_1_17w-gate_up_down_proj | 58.81 | 57.17 | 82.26 | 55.89 | 39.93 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r16 | 58.86 | 57.25 | 82.27 | 56.16 | 39.75 |
|CHIH-HUNG/llama-2-13b-FINETUNE1_17w-r4 | 58.71 | 56.74 | 82.27 | 56.18 | 39.65 |
# How to convert dataset to json
- Pass the dataset name to **load_dataset**, and use **take** to specify how many samples to fetch
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Read the dataset; take(n) can fetch the first n samples of the dataset
dataset = load_dataset("huangyt/FINETUNE1", split="train", streaming=True)

# Extract the desired fields and build a new list of dictionaries
extracted_data = []
for example in dataset:
    extracted_example = {
        "instruction": example["instruction"],
        "input": example["input"],
        "output": example["output"]
    }
    extracted_data.append(extracted_example)

# Specify the JSON file name
json_filename = "huangyt_FINETUNE_1.json"

# Write the JSON file
with open(json_filename, "w") as json_file:
    json.dump(extracted_data, json_file, indent=4)

print(f"Data extracted and saved as {json_filename}")
``` |
quantumaikr/llama-2-70B-chat | quantumaikr | 2023-09-05T11:43:11Z | 1,380 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-05T06:33:31Z | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# quantumaikr/llama-2-70B-chat
## Model Description
`quantumaikr/llama-2-70B-chat` is a Llama 2 70B model (garage-bAInd/Platypus2-70B-instruct) fine-tuned on an unspecified dataset.
## Usage
Start chatting with `quantumaikr/llama-2-70B-chat` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/llama-2-70B-chat")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/llama-2-70B-chat", torch_dtype=torch.float16, device_map="auto")
system_prompt = "You are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal."
message = "Write me a poem please"
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{message}[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.95, top_k=30, max_new_tokens=2048)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
QuantumLM should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant
The output of QuantumLM
```
## Use and Limitations
### Intended Use
These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Limitations and bias
Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
Contact us : [email protected] |
ajibawa-2023/Uncensored-Jordan-7B | ajibawa-2023 | 2023-11-20T16:36:58Z | 1,380 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-23T10:33:06Z | ---
license: cc-by-nc-nd-4.0
language:
- en
---
**Jordan: An Uncensored Model**
The name "Jordan" pays homage to the charismatic character Jordan Belfort, portrayed by Leonardo DiCaprio in the movie "The Wolf of Wall Street."
Just like the character, Jordan aspires to push boundaries and encourage unfiltered conversations.
Jordan Belfort, the central character in "The Wolf of Wall Street," is known for his audaciousness, charisma, and willingness to speak about anything, no matter the societal norms or boundaries.
Jordan, the AI language model, draws inspiration from these qualities to offer a space where users can discuss a wide range of topics without fear of censorship or constraints.
Users can engage in candid discussions about politics, religion, contentious issues, and other subjects that might be off-limits on mainstream social media platforms.
It is trained on around 155,000 sets of conversations, each set having 10~15 conversations. We will not release this data.
**Warning**
An uncensored model has no or little guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous objects.
Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 28 hours. The DeepSpeed codebase was used for training. This model was trained on Llama-1 by Meta.
Llama-1 was used as it is very well suited for uncensored conversation.
This is a full fine tuned model. Links for quantized models are given below.
**GPTQ GGML & AWQ**
GPTQ: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GPTQ)
GGUF: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-GGUF)
AWQ: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-7B-AWQ)
Special Thanks to [TheBloke](https://huggingface.co/TheBloke) for making these models available.
**Note**
Model was trained before the release of Mistral.
**Example Prompt:**
```
This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a Uncensored AI assistant.
USER: <prompt>
ASSISTANT:
```
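For reference, here is a minimal usage sketch with the `transformers` pipeline that follows the prompt format above; the generation settings are illustrative assumptions and are not taken from this card:
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ajibawa-2023/Uncensored-Jordan-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Assemble the prompt in the USER/ASSISTANT format described above.
context = "You are a Uncensored AI assistant."
question = "Tell me about the history of Wall Street."
prompt = f"{context}\nUSER: {question}\nASSISTANT: "

result = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```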
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Uncensored-Jordan-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.88 |
| ARC (25-shot) | 51.28 |
| HellaSwag (10-shot) | 77.37 |
| MMLU (5-shot) | 45.69 |
| TruthfulQA (0-shot) | 47.5 |
| Winogrande (5-shot) | 71.11 |
| GSM8K (5-shot) | 6.75 |
| DROP (3-shot) | 28.46 |
|
WebraftAI/synapsellm-7b-mistral-v0.4-preview2 | WebraftAI | 2024-04-06T04:39:10Z | 1,380 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"conversational",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-30T10:35:18Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
model-index:
- name: synapsellm-7b-mistral-v0.4-preview2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 74.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.79
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
---
# SynapseLLM:
SynapseLLM, a significant achievement by WebraftAI, represents a series of large language AI models designed to create robust, generalized, and decentralized information systems. This repository specifically houses the SynapseLLM finetuned version of Mistral. The finetuning process is conducted on a custom dataset, albeit limited in scope, focusing on code and normal question-answering scenarios. This adaptation showcases the model's versatility and applicability within specific domains, contributing to the broader landscape of AI advancements.
## Model Details
**SynapseLLM:**
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: Qlora
- Precision: float16
- Batch size: 32
- Maximum gradient normal: 0.3
- Optimizer: paged_adamw_32bit
- Warmup Ratio: 0.03
- Step(s) (trained): 150
- Epoch(s) (trained): 1
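As an illustration, the hyperparameters above map roughly to the following QLoRA setup with `transformers`/`peft`; the LoRA rank, alpha and dropout are assumptions, since they are not stated in this card:
```python
from transformers import TrainingArguments
from peft import LoraConfig

# LoRA adapter settings; r, lora_alpha and lora_dropout are assumed, not documented here.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Training arguments mirroring the values listed above.
training_args = TrainingArguments(
    output_dir="synapsellm-qlora",
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    max_grad_norm=0.3,
    optim="paged_adamw_32bit",
    warmup_ratio=0.03,
    max_steps=150,
    fp16=True,  # float16 precision, as listed above
)
```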
### Model Description
This is a 7B-parameter, decoder-only transformer model fine-tuned on chat Q/A and code instructions. It's a preview finetune of Mistral 7B v0.1 on a sample dataset of 770k rows comprising 361k Maths Instruct Q/A, 143k GPT-3.5 Q/A, 140k General Code, 63k Python code, and 54k General Q/A (through GPT-4) [each row contains one instruction and one response]. This is a full model merged and compiled with the trained adapters, so you can easily load it through the transformers library.
- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English Only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7b-v0.1
### Prompt format:
This model follows the same prompt format as Mistral Instruct 7B v0.1. A sample prompt is given below:
```text
<s>[INST] Hello, how are you? [/INST]
```
### Example Code:
Here's an example code using `transformers` library provided by HF.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")
model = AutoModelForCausalLM.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")
prompt= "<s>[INST] Hello! [/INST] "
device = "cuda"
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
### Model Bias:
This model has some bias areas, discussed below:
- The model might output factually incorrect information.
- The model does not follow system prompts.
- The model does not have any kind of memory; researchers can experiment with feeding it memory.
- The model is trained on a mix of data sources, so it can produce biased information or claim to be a GPT model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.93|
|AI2 Reasoning Challenge (25-Shot)|52.99|
|HellaSwag (10-Shot) |74.54|
|MMLU (5-Shot) |54.60|
|TruthfulQA (0-shot) |53.79|
|Winogrande (5-shot) |73.95|
|GSM8k (5-shot) |25.70|
|
kyujinpy/Ko-PlatYi-6B-gu | kyujinpy | 2023-12-09T13:22:13Z | 1,380 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOR-gugugu-platypus-set",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-02T19:08:30Z | ---
language:
- ko
datasets:
- kyujinpy/KOR-gugugu-platypus-set
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **Ko-PlatYi-6B-gu**
<img src='./Ko-PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Ko-PlatYi-6B-gu is an auto-regressive language model based on the Yi-34B transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)
**Training Dataset**
[kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set).
# **Model Benchmark**
## Open leaderboard
> Follow up as [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | CommonGen-V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Ko-PlatYi-6B-O | 49.00 | 43.52 | 53.59 | 47.47 | 41.01 | 59.39 |
| Ko-PlatYi-6B-kiwi | 48.75 | 41.98 | 53.61 | 46.10 | 38.30 | 63.75 |
| **Ko-PlatYi-6B-gu** | 48.76 | 42.75 | 54.00 | 44.66 | 41.22 | 61.16 |
| Ko-PlatYi-6B | 49.97 | 43.00 | 53.55 | 46.50 | 40.31 | 66.47 |
| Yi-Ko-6B | 48.79 | 41.04 | 53.39 | 46.28 | 41.64 | 61.63 |
---
## AI-Harness Evaluation
> AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness)
| Model | BoolQ | Copa | HellaSwag | Sentineg |
| --- | --- | --- | --- | --- |
| | *Zero-shot* ||||
| Ko-PlatYi-6B-O | 0.3343 | 0.7687 | 0.4833 | 0.5794 |
| Ko-PlatYi-6B-kiwi | 0.3343 | 0.7665 | 0.4746 | **0.6248** |
| **Ko-PlatYi-6B-gu** | **0.7077** | **0.7696** | 0.4797 | 0.3979 |
| Ko-PlatYi-6B | 0.3343 | 0.7684 | **0.4917** | 0.5226 |
| Yi-Ko-6B | **0.7070** | 0.7696 | **0.5009** | 0.4044 |
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Ko-PlatYi-6B-gu"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- |
NurtureAI/neural-chat-11b-v3-2 | NurtureAI | 2024-03-04T17:59:16Z | 1,380 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-04T17:50:08Z | ---
license: apache-2.0
model-index:
- name: neural-chat-11b-v3-2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/neural-chat-11b-v3-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/neural-chat-11b-v3-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/neural-chat-11b-v3-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.22
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/neural-chat-11b-v3-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/neural-chat-11b-v3-2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 42.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NurtureAI/neural-chat-11b-v3-2
name: Open LLM Leaderboard
---
## Fine-tuning on Intel Gaudi2
This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NurtureAI__neural-chat-11b-v3-2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.52|
|AI2 Reasoning Challenge (25-Shot)|66.64|
|HellaSwag (10-Shot) |82.12|
|MMLU (5-Shot) |62.37|
|TruthfulQA (0-shot) |60.22|
|Winogrande (5-shot) |79.64|
|GSM8k (5-shot) |42.15|
|
WebraftAI/synapsellm-7b-mistral-v0.5-preview | WebraftAI | 2023-12-09T15:36:59Z | 1,380 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-09T13:21:19Z | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- code
---
# SynapseLLM:
SynapseLLM, a significant achievement by WebraftAI, represents a series of large language AI models designed to create robust, generalized, and decentralized information systems. This repository specifically houses the SynapseLLM finetuned version of Mistral. The finetuning process is conducted on a custom dataset, albeit limited in scope, focusing on code and normal question-answering scenarios. This adaptation showcases the model's versatility and applicability within specific domains, contributing to the broader landscape of AI advancements.
## Model Details
**SynapseLLM:**
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: Qlora
- Precision: float16
- Batch size: 32
- Maximum gradient normal: 0.3
- Optimizer: paged_adamw_32bit
- Warmup Ratio: 0.03
- Step(s) (trained): 100
- Epoch(s) (trained): 1
### Model Description
This is a 7B-parameter, decoder-only transformer model fine-tuned on chat Q/A and code instructions. It's a preview finetune of Mistral 7B v0.1 on a sample dataset of 1.54M rows comprising 361k Maths Instruct Q/A, 143k GPT-3.5 Q/A, 140k General Code, 63k Python code, and 900k General Q/A (through GPT-4) [each row contains one instruction and one response]. This is a full model merged and compiled with the trained adapters, so you can easily load it through the transformers library.
- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English Only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7b-v0.1
### Prompt format:
This model follows the same prompt format as Mistral Instruct 7B v0.1. A sample prompt is given below:
```text
<s>[INST] Hello, how are you? [/INST]
```
### Example Code:
Here's an example code using `transformers` library provided by HF.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.5-preview")
model = AutoModelForCausalLM.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.5-preview")
prompt= "<s>[INST] Hello! [/INST] "
device = "cuda"
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
### Model Bias:
This model has some bias areas, discussed below:
- The model might output factually incorrect information.
- The model does not follow system prompts.
- The model does not have any kind of memory; researchers can experiment with feeding it memory.
- The model is trained on a mix of data sources, so it can produce biased information or claim to be a GPT model.
|
OpenBuddy/openbuddy-mixtral-7bx8-v16.3-32k | OpenBuddy | 2024-01-02T06:41:00Z | 1,380 | 18 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-02T02:06:16Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mixtral-8x7B-v0.1
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## ๅ
่ดฃๅฃฐๆ
ๆๆOpenBuddyๆจกๅๅๅญๅจๅบๆ็ๅฑ้ๆง๏ผๅฏ่ฝไบง็้่ฏฏ็ใๆๅฎณ็ใๅ็ฏๆง็ๆๅ
ถไปไธ่ฏ็่พๅบใ็จๆทๅจๅ
ณ้ฎๆ้ซ้ฃ้ฉๅบๆฏไธญๅบ่ฐจๆ
่กไบ๏ผไธ่ฆไฝฟ็จ่ฟไบๆจกๅ๏ผไปฅๅ
ๅฏผ่ดไบบ่บซไผคๅฎณใ่ดขไบงๆๅคฑๆ้ๅคงๆๅคฑใๆญค็ฑปๅบๆฏ็ไพๅญๅ
ๆฌไฝไธ้ไบๅป็้ขๅใๅฏ่ฝๅฏผ่ดไผคๅฎณ็่ฝฏ็กฌไปถ็ณป็ป็ๆงๅถไปฅๅ่ฟ่ก้่ฆ็่ดขๅกๆๆณๅพๅณ็ญใ
OpenBuddyๆโๅๆ ทโๆไพ๏ผไธ้ๅธฆไปปไฝ็ง็ฑป็ๆ็คบๆๆ็คบ็ไฟ่ฏ๏ผๅ
ๆฌไฝไธ้ไบ้้ๆงใ็นๅฎ็ฎ็็้็จๆงๅ้ไพตๆ็ๆ็คบไฟ่ฏใๅจไปปไฝๆ
ๅตไธ๏ผไฝ่
ใ่ดก็ฎ่
ๆ็ๆๆๆ่
ๅไธๅฏนๅ ่ฝฏไปถๆไฝฟ็จๆๅ
ถไป่ฝฏไปถไบคๆ่ไบง็็ไปปไฝ็ดข่ตใๆๅฎณ่ตๅฟๆๅ
ถไป่ดฃไปป๏ผๆ ่ฎบๆฏๅๅใไพตๆ่ฟๆฏๅ
ถไปๅๅ ๏ผๆฟๆ
่ดฃไปปใ
ไฝฟ็จOpenBuddyๅณ่กจ็คบๆจๅๆ่ฟไบๆกๆฌพๅๆกไปถ๏ผๅนถๆฟ่ฎคๆจไบ่งฃๅ
ถไฝฟ็จๅฏ่ฝๅธฆๆฅ็ๆฝๅจ้ฃ้ฉใๆจ่ฟๅๆ่ตๅฟๅนถไฝฟไฝ่
ใ่ดก็ฎ่
ๅ็ๆๆๆ่
ๅ
ๅๅ ๆจไฝฟ็จOpenBuddy่ไบง็็ไปปไฝ็ดข่ตใๆๅฎณ่ตๅฟๆ่ดฃไปป็ๅฝฑๅใ
KnutJaegersberg/Deacon-1_8b | KnutJaegersberg | 2024-01-04T07:18:26Z | 1,380 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:KnutJaegersberg/trilobite",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-03T20:21:48Z | ---
license: other
license_name: qwen
license_link: LICENSE
datasets:
- KnutJaegersberg/trilobite
---
Prompt Example:
```
### System:
You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
``` |
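A minimal usage sketch with the `transformers` library, wiring the prompt template above into `generate` (the sampling settings are illustrative assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KnutJaegersberg/Deacon-1_8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Fill in the prompt template shown above.
prompt = (
    "### System:\n"
    "You are an AI assistant. User will give you a task. Your goal is to complete the task as "
    "faithfully as you can. While performing the task think step-by-step and justify your steps.\n\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```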
OpenBuddy/openbuddy-deepseek-67b-v15-base | OpenBuddy | 2023-12-05T10:43:52Z | 1,379 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-04T04:02:51Z | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL
---
# โ ๏ธ About Base-series Models โ ๏ธ
This is a part of the Base-series models, trained utilizing approximately 50% of conversational data. It embodies cognitive and dialogue capabilities parallel to the fully-trained OpenBuddy models, yet it hasnโt been extensively fine-tuned for generic conversational tasks.
We released this model intending to empower the community, enabling further fine-tuning and deployment of specialized, domain-specific models.
For immediate use in generic conversations, consider using our versions without the -base suffix.
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-67b-base
License: [deepseek](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## ๅ
่ดฃๅฃฐๆ
ๆๆOpenBuddyๆจกๅๅๅญๅจๅบๆ็ๅฑ้ๆง๏ผๅฏ่ฝไบง็้่ฏฏ็ใๆๅฎณ็ใๅ็ฏๆง็ๆๅ
ถไปไธ่ฏ็่พๅบใ็จๆทๅจๅ
ณ้ฎๆ้ซ้ฃ้ฉๅบๆฏไธญๅบ่ฐจๆ
่กไบ๏ผไธ่ฆไฝฟ็จ่ฟไบๆจกๅ๏ผไปฅๅ
ๅฏผ่ดไบบ่บซไผคๅฎณใ่ดขไบงๆๅคฑๆ้ๅคงๆๅคฑใๆญค็ฑปๅบๆฏ็ไพๅญๅ
ๆฌไฝไธ้ไบๅป็้ขๅใๅฏ่ฝๅฏผ่ดไผคๅฎณ็่ฝฏ็กฌไปถ็ณป็ป็ๆงๅถไปฅๅ่ฟ่ก้่ฆ็่ดขๅกๆๆณๅพๅณ็ญใ
OpenBuddyๆโๅๆ ทโๆไพ๏ผไธ้ๅธฆไปปไฝ็ง็ฑป็ๆ็คบๆๆ็คบ็ไฟ่ฏ๏ผๅ
ๆฌไฝไธ้ไบ้้ๆงใ็นๅฎ็ฎ็็้็จๆงๅ้ไพตๆ็ๆ็คบไฟ่ฏใๅจไปปไฝๆ
ๅตไธ๏ผไฝ่
ใ่ดก็ฎ่
ๆ็ๆๆๆ่
ๅไธๅฏนๅ ่ฝฏไปถๆไฝฟ็จๆๅ
ถไป่ฝฏไปถไบคๆ่ไบง็็ไปปไฝ็ดข่ตใๆๅฎณ่ตๅฟๆๅ
ถไป่ดฃไปป๏ผๆ ่ฎบๆฏๅๅใไพตๆ่ฟๆฏๅ
ถไปๅๅ ๏ผๆฟๆ
่ดฃไปปใ
ไฝฟ็จOpenBuddyๅณ่กจ็คบๆจๅๆ่ฟไบๆกๆฌพๅๆกไปถ๏ผๅนถๆฟ่ฎคๆจไบ่งฃๅ
ถไฝฟ็จๅฏ่ฝๅธฆๆฅ็ๆฝๅจ้ฃ้ฉใๆจ่ฟๅๆ่ตๅฟๅนถไฝฟไฝ่
ใ่ดก็ฎ่
ๅ็ๆๆๆ่
ๅ
ๅๅ ๆจไฝฟ็จOpenBuddy่ไบง็็ไปปไฝ็ดข่ตใๆๅฎณ่ตๅฟๆ่ดฃไปป็ๅฝฑๅใ
fblgit/una-xaberius-34b-v1beta | fblgit | 2024-03-08T10:24:50Z | 1,379 | 85 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"UNA",
"juanako",
"cybertron",
"xaberius",
"dataset:fblgit/tree-of-knowledge",
"dataset:garage-bAInd/Open-Platypus",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:Open-Orca/OpenOrca",
"license:cc-by-nc-nd-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-05T08:44:20Z | ---
license: cc-by-nc-nd-4.0
library_name: transformers
tags:
- UNA
- juanako
- cybertron
- xaberius
datasets:
- fblgit/tree-of-knowledge
- garage-bAInd/Open-Platypus
- allenai/ultrafeedback_binarized_cleaned
- Open-Orca/OpenOrca
model-index:
- name: una-xaberius-34b-v1beta
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
---
# Model Card for una-xaberius-34b-v1-beta (UNA: Uniform Neural Alignment)
**This is another King-Breed from Juanako.AI**
**We have Identified some Problems with regular Quants** [use these models to play with Xaberius-34B and harness its power in full](https://huggingface.co/models?search=xaberius%20lonestriker).
**Unfortunately we were not able to use any of TheBloke models, seems there is some undesired results out of it.**
Introducing THE MODEL: **XABERIUS 34B v1-BETA** an *experimental* 34B LLaMa-Yi-34B based model, best on it's series. Trained on SFT, DPO and UNA (Unified Neural Alignment) on multiple datasets.
Timeline:
* 05-Dec-2023 **v1-beta released**
* 08-Dec-2023 **Evaluation been "RUNNING" for 2 days.. no results yet**
* 09-Dec-2023 **Evaluation been "FINISHED", confirming #1 spot** outperforming the contaminated-disqualified tigerbot :)
[Results Here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__una-xaberius-34b-v1beta/blob/main/results_2023-12-09T11-16-37.904970.json)
Sidenote: Tests took 19H to run, wonder what happened in the 48H that HF held this one.. interim releasing manually other results??..
| Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **69.49** | **68.43** | **85.85** | 63.34 | **63.28** | **80.90** | **55.12** |
| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **69.67** | **68.26** | **85.?4** | 63.23 | **64.63** | **81.37** | **55.04** |
| [fblgit/una-xaberius-34b-v1beta](https://huggingface.co/fblgit/una-xaberius-34b-v1beta) | **74.18** | **70.39** | **86.77** | **78.15** | **61.45** | **84.93** | **63.38** |
## Evaluations
- Scores **74.21**, outperforming former leader tigerbot-70b-chat and landing in the #1 position of the HuggingFace Leaderboard: 08 December 2023.
- Scores **79.13** in MMLU, setting a new record not just for 34B but for all open-source LLMs :)
SideNote: MMLU was a very solid 79+.. weird, we'll dive further into this for irregularities :)
## Model Details
Adiestrated with UNA: Uniform Neural Alignment technique (paper going out soon).
* What is **NOT** UNA? Its not a merged layers model. Is not SLERP or SLURP or similar.
* What **is** UNA? A formula & A technique to *TAME* models
* When will the code and paper be released? When we have time; contribute and it'll be faster.
### Model Description
- **Developed by:** [juanako.ai](https://juanako.ai)
- **Author:** [Xavier M.]([email protected])
- **Investors** [CONTACT HERE]([email protected])
- **Model type:** LLaMa YI-34B
- **Funded by Cybertron's H100's** with few hours training.
### Prompt
The model is very good and works well on almost any prompt, but the ChatML format and Alpaca system prompt get the best results
```
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain QKV<|im_end|>
<|im_start|>assistant
```
```
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!
### Human: Explain QKV
### Assistant:
```
```
[Round <|round|>]
้ฎ๏ผExplain QKV
็ญ๏ผ
```
```
[Round <|round|>]
Question๏ผExplain QKV
Answer๏ผ
```
```
Question๏ผExplain QKV
Answer๏ผ
```
### Framework versions
- Transformers 4.35.2-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citations
If you find Xaberius, Cybertron, Juanako or any of our models useful, especially if you use it for your big brand or you clone/merge/SLERP my models, please cite:
```
@misc{unaxaberius34b,
title={Xaberius 34B: Uniform Neural Alignment},
author={Xavier Murias},
year={2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/fblgit/una-xaberius-34b-v1beta}},
}
```
**Thanks to LoneStriker for his ExLLama2 models of high quality that works properly.**
**Enormous kudos to the Yi-34B team for the outstanding model; UNA is only as good as its pre-trained model** THANK YOU!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__una-xaberius-34b-v1beta)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.18|
|AI2 Reasoning Challenge (25-Shot)|70.39|
|HellaSwag (10-Shot) |86.77|
|MMLU (5-Shot) |78.15|
|TruthfulQA (0-shot) |61.45|
|Winogrande (5-shot) |84.93|
|GSM8k (5-shot) |63.38|
|
jan-ai/Solar-10.7B-SLERP | jan-ai | 2023-12-14T08:41:30Z | 1,379 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-14T08:31:51Z | ---
license: apache-2.0
language:
- en
---
# WARNING
This is a model file only for evaluation. Please use the model here:
- Model: [Solar-10.7B-SLERP](https://huggingface.co/janhq/Solar-10.7B-SLERP)
- GGUF: [Solar-10.7B-SLERP-GGUF](https://huggingface.co/janhq/Solar-10.7B-SLERP-GGUF)
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a
>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model uses the `Slerp` merge method to combine the best models (as of 14th Dec) on the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
1. [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
2. [janhq/Pandora-v1-10.7B](https://huggingface.co/janhq/Pandora-v1-10.7B)
- base model: [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
The yaml config file for this model is here:
```yaml
slices:
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [0, 48]
- model: janhq/Pandora-v1-10.7B
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
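To reproduce a merge like this locally, the config above can be saved to a file and passed to mergekit's command-line entry point; this is a sketch, and the file and output names are illustrative:
```
pip install mergekit
mergekit-yaml solar-slerp-config.yaml ./Solar-10.7B-SLERP --cuda
```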
# Prompt template
- **ChatML**
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
# Run this model
You can run this model using [Jan](https://jan.ai/) on Mac, Windows, or Linux.
**Jan is an open source, ChatGPT alternative that is:**
๐ป **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
๐๏ธ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
๐ **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
๐ **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)
- Please use the [Solar-10.7B-SLERP-GGUF](https://huggingface.co/janhq/Solar-10.7B-SLERP-GGUF) when using on Jan.

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [SLERP](https://github.com/Digitous/LLM-SLERP-Merge)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) |
cookinai/DonutLM-v1 | cookinai | 2024-01-03T21:20:08Z | 1,379 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-20T03:41:56Z | ---
license: apache-2.0
tags:
- merge
---
# Fixed
Heard a lot in the community about jondurbin/bagel-dpo-7b-v0.1 and it sounds interesting.
Slerp Merge of AIDC-ai-business/Marcoroni-7B-v3 and jondurbin/bagel-dpo-7b-v0.1
.yaml file for mergekit
```.yaml:
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 32]
- model: jondurbin/bagel-dpo-7b-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
Azazelle/Dumb-Maidlet | Azazelle | 2023-12-30T01:25:41Z | 1,379 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-30T01:16:24Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Dumb-Maidlet
<!-- Provide a quick summary of what the model is/does. -->
Slerp merge of Noromaid-7b-v0.2, NSFW_DPO_Noromaid-7b, go-bruins-v2, and smol-7b.
.yaml file for mergekit
```.yaml:
slices:
- sources:
- model: Azazelle/Half-NSFW_Noromaid-7b
layer_range: [0, 32]
- model: Azazelle/smol_bruin-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0.22, 0.61, 0.46, 0.77, 1]
- filter: mlp
value: [0.78, 0.39, 0.54, 0.23, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
DeepKarkhanis/Mistral-Passthrough-8L-10B | DeepKarkhanis | 2024-01-09T07:12:19Z | 1,379 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-09T07:08:16Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- mistralai/Mistral-7B-Instruct-v0.2
---
# Mistral-Passthrough-8L-10B
Mistral-Passthrough-8L-10B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 24]
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DeepKarkhanis/Mistral-Passthrough-8L-10B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ChrisWilson011016/5HGawGESBrfq2eVxNHscX4VnxpCEYZa6tXXCTXo9PxJLVZCx_vgg | ChrisWilson011016 | 2024-03-04T18:51:50Z | 1,379 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-24T15:12:34Z | Entry not found |
bartowski/Qwen2-0.5B-Instruct-GGUF | bartowski | 2024-06-07T16:09:41Z | 1,379 | 2 | null | [
"gguf",
"chat",
"text-generation",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-06-07T15:57:39Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Qwen2-0.5B-Instruct
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization.
Original model: https://huggingface.co/Qwen/Qwen2-0.5B-Instruct
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Qwen2-0.5B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q8_0.gguf) | Q8_0 | .53GB | Extremely high quality, generally unneeded but max available quant. |
| [Qwen2-0.5B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q6_K.gguf) | Q6_K | .50GB | Very high quality, near perfect, *recommended*. |
| [Qwen2-0.5B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q5_K_M.gguf) | Q5_K_M | .42GB | High quality, *recommended*. |
| [Qwen2-0.5B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q5_K_S.gguf) | Q5_K_S | .41GB | High quality, *recommended*. |
| [Qwen2-0.5B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q4_K_M.gguf) | Q4_K_M | .39GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Qwen2-0.5B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q4_K_S.gguf) | Q4_K_S | .38GB | Slightly lower quality with more space savings, *recommended*. |
| [Qwen2-0.5B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ4_XS.gguf) | IQ4_XS | .34GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Qwen2-0.5B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-Q3_K_L.gguf) | Q3_K_L | .36GB | Lower quality but usable, good for low RAM availability. |
| [Qwen2-0.5B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Qwen2-0.5B-Instruct-GGUF/blob/main/Qwen2-0.5B-Instruct-IQ3_M.gguf) | IQ3_M | .34GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Qwen2-0.5B-Instruct-GGUF --include "Qwen2-0.5B-Instruct-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Qwen2-0.5B-Instruct-GGUF --include "Qwen2-0.5B-Instruct-Q8_0.gguf/*" --local-dir Qwen2-0.5B-Instruct-Q8_0
```
You can either specify a new local-dir (Qwen2-0.5B-Instruct-Q8_0) or download them all in place (./)
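As one way to run these quants locally (not covered in this card), here is a minimal sketch using the `llama-cpp-python` bindings; the file path and settings are illustrative:
```python
from llama_cpp import Llama

# Load a downloaded GGUF quant and use the ChatML template described above.
llm = Llama(
    model_path="./Qwen2-0.5B-Instruct-Q4_K_M.gguf",
    n_ctx=4096,
    chat_format="chatml",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```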
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
timm/convnext_femto.d1_in1k | timm | 2024-02-10T23:26:59Z | 1,378 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-13T07:08:14Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for convnext_femto.d1_in1k
A ConvNeXt image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 5.2
- GMACs: 0.8
- Activations (M): 4.6
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_femto.d1_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_femto.d1_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 48, 56, 56])
# torch.Size([1, 96, 28, 28])
# torch.Size([1, 192, 14, 14])
# torch.Size([1, 384, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_femto.d1_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
Suprit/Zhongjing-LLaMA-base | Suprit | 2024-01-13T09:51:04Z | 1,378 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"arxiv:2308.03549",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-14T17:02:02Z | ---
license: mit
language:
- zh
---
Zhongjing is the first Chinese medical LLaMA-based LLM that implements an entire training pipeline, from continuous pre-training and SFT to Reinforcement Learning from Human Feedback (RLHF).
The paper is here: [[2308.03549] Zhongjing: Enhancing the Chinese Medical Capabilities of Large Language Model through Expert Feedback and Real-world Multi-turn Dialogue](https://arxiv.org/abs/2308.03549). |
ajibawa-2023/Python-Code-33B | ajibawa-2023 | 2024-03-04T12:12:38Z | 1,378 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"dataset:ajibawa-2023/Python-Code-23k-ShareGPT",
"license:cc-by-nc-nd-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-13T05:14:01Z | ---
language:
- en
license: cc-by-nc-nd-4.0
tags:
- code
datasets:
- ajibawa-2023/Python-Code-23k-ShareGPT
model-index:
- name: Python-Code-33B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 56.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Python-Code-33B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.01
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Python-Code-33B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Python-Code-33B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.39
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Python-Code-33B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Python-Code-33B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 19.18
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Python-Code-33B
name: Open LLM Leaderboard
---
**Python-Code-33B**
Large Language Models (LLMs) are good at code generation, but they do sometimes make mistakes. What if they could also give a detailed explanation along with the code?
This is what I have tried here. The base Llama-2 model was used for training. It is trained on around 23,000+ sets of code, each set having 2 conversations.
This data was generated using GPT-3.5, GPT-4, etc. The conversations are in Vicuna/ShareGPT format. Each set, along with the code, has a detailed explanation.
I have released the [data](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT).
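For readers unfamiliar with the layout, a ShareGPT-style record is simply a list of alternating human/assistant turns. The snippet below is a purely illustrative sketch of what one training example might look like (the field names follow the common ShareGPT convention and may differ slightly from the released dataset):
```python
# Illustrative ShareGPT-style record (hypothetical; not copied from the dataset).
example = {
    "conversations": [
        {"from": "human", "value": "Write a Python function that reverses a string."},
        {
            "from": "gpt",
            "value": (
                "def reverse_string(s: str) -> str:\n"
                "    return s[::-1]\n\n"
                "Slicing with a step of -1 walks the string backwards, so the "
                "function returns the characters in reverse order."
            ),
        },
    ]
}

print(example["conversations"][1]["value"])
```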
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took 42 hours. The DeepSpeed codebase was used for training. This was trained on Llama-1 by Meta.
This is a fully fine-tuned model. Links to quantized models are given below.
**GPTQ, GGUF & AWQ**
GPTQ: [Link](https://huggingface.co/TheBloke/Python-Code-33B-GPTQ)
GGUF: [Link](https://huggingface.co/TheBloke/Python-Code-33B-GGUF)
AWQ: [Link](https://huggingface.co/TheBloke/Python-Code-33B-AWQ)
**Example Prompt:**
```
This is a conversation with your helpful AI assistant. AI assistant can generate Python Code along with necessary explanation.
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
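A minimal inference sketch built around that template (the generation settings are illustrative, and `device_map="auto"` assumes `accelerate` is installed):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ajibawa-2023/Python-Code-33B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = ("This is a conversation with your helpful AI assistant. "
          "AI assistant can generate Python Code along with necessary explanation.")
prompt = f"{system}\n\nContext\nYou are a helpful AI assistant.\n\nUSER: Write a function that computes a factorial.\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens and print only the newly generated continuation
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```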
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Python-Code-33B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.06|
|AI2 Reasoning Challenge (25-Shot)|56.31|
|HellaSwag (10-Shot) |81.01|
|MMLU (5-Shot) |54.22|
|TruthfulQA (0-shot) |44.39|
|Winogrande (5-shot) |75.22|
|GSM8k (5-shot) |19.18|
|
TinyPixel/small-llama2 | TinyPixel | 2023-11-22T06:45:35Z | 1,378 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-22T06:40:27Z | Entry not found |
ignos/LeoScorpius-GreenNode-Alpaca-7B-v1 | ignos | 2023-12-15T16:54:06Z | 1,378 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-15T16:04:10Z | ---
license: apache-2.0
---
# Model Card for Model ID
This model is a finetuning of other models based on mistralai/Mistral-7B-v0.1.
## Model Details
### Model Description
The model was generated by merging the models [viethq188/LeoScorpius-7B-Chat-DPO](https://huggingface.co/viethq188/LeoScorpius-7B-Chat-DPO) and [GreenNode/GreenNodeLM-7B-v1olet](https://huggingface.co/GreenNode/GreenNodeLM-7B-v1olet), followed by fine-tuning on an Alpaca dataset, [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca).
- **Developed by:** Ignos
- **Model type:** Mistral
- **License:** Apache-2.0
## Uses
This model was created to compare behaviors and metrics with respect to the base model, as well as against other models that, starting from the same base, have been fine-tuned on different datasets.
## Bias, Risks, and Limitations
The same bias, risks and limitations from base models.
## Training Details
### Training Data
- [tatsu-lab/alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)
### Training Procedure
- Training with QLoRA approach and merging with base model.
### Results
- Huggingface evaluation pending
#### Summary
## Technical Specifications
### Model Architecture and Objective
- Models based on Mistral Architecture
### Compute Infrastructure
- Training on RunPod
#### Hardware
- 4 x Nvidia RTX 4090
- 64 vCPU 503 GB RAM
#### Software
- Mergekit (main)
- Axolotl 0.3.0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
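For reference, the list above maps roughly onto the following `BitsAndBytesConfig` when loading the base model for QLoRA-style training (a minimal sketch under that assumption; the exact training script was not released with this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit base weights
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16, # compute in bfloat16
)

# Base model assumed from the card description; the llm_int8_* fields above are library defaults.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
```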
### Framework versions
- PEFT 0.6.0
|
samir-fama/FernandoGPT-v1 | samir-fama | 2024-01-12T14:20:50Z | 1,378 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-30T00:10:59Z | ---
license: apache-2.0
language:
- en
tags:
- merge
---

# FernandoGPT-v1
FernandoGPT-v1 is a merge of [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp) and [shadowml/Marcoro14-7B-slerp](https://huggingface.co/shadowml/Marcoro14-7B-slerp) |
We-Want-GPU/SOLAR-10.7B-orca-alpaca-gpt4-math | We-Want-GPU | 2023-12-30T12:05:25Z | 1,378 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-30T10:44:38Z | Entry not found |
TomGrc/FusionNet | TomGrc | 2024-03-04T20:52:57Z | 1,378 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-31T01:44:05Z | ---
language:
- en
license: mit
tags:
- merge
pipeline_tag: text-generation
model-index:
- name: FusionNet
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.25
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.36
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.95
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.27
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet
name: Open LLM Leaderboard
---
# FusionNet
A model fine-tuned on English data using the Fusion method.
## Model description
FusionNet is a model for experimenting with the "Fusion" method, which can significantly increase the performance of the original model. FusionNet has 10.7B parameters, and this checkpoint is fine-tuned. Enjoy!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FusionNet)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.38|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |88.42|
|MMLU (5-Shot) |66.36|
|TruthfulQA (0-shot) |71.95|
|Winogrande (5-shot) |83.27|
|GSM8k (5-shot) |65.05|
|
bartowski/dolphin-2.9-llama3-70b-GGUF | bartowski | 2024-05-11T10:04:44Z | 1,378 | 1 | null | [
"gguf",
"text-generation",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:llama3",
"region:us"
] | text-generation | 2024-05-11T07:08:48Z | ---
license: llama3
language:
- en
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of dolphin-2.9-llama3-70b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2828">b2828</a> for quantization.
Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
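If you are constructing the prompt yourself (for example with `llama-cpp-python`), a minimal sketch of filling in this template looks like the following (the file name and sampling settings are placeholders):
```python
from llama_cpp import Llama

# Path assumes you downloaded one of the quant files listed below
llm = Llama(model_path="dolphin-2.9-llama3-70b-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

system_prompt = "You are Dolphin, a helpful AI assistant."
prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    "<|im_start|>user\nExplain what quantization does to a model.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```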
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.9-llama3-70b-Q8_0.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/tree/main/dolphin-2.9-llama3-70b-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. |
| [dolphin-2.9-llama3-70b-Q6_K.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/tree/main/dolphin-2.9-llama3-70b-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. |
| [dolphin-2.9-llama3-70b-Q5_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
| [dolphin-2.9-llama3-70b-Q5_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. |
| [dolphin-2.9-llama3-70b-Q4_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [dolphin-2.9-llama3-70b-Q4_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. |
| [dolphin-2.9-llama3-70b-IQ4_NL.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [dolphin-2.9-llama3-70b-IQ4_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [dolphin-2.9-llama3-70b-Q3_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. |
| [dolphin-2.9-llama3-70b-Q3_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [dolphin-2.9-llama3-70b-IQ3_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [dolphin-2.9-llama3-70b-IQ3_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [dolphin-2.9-llama3-70b-Q3_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
| [dolphin-2.9-llama3-70b-IQ3_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [dolphin-2.9-llama3-70b-IQ3_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [dolphin-2.9-llama3-70b-Q2_K.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
| [dolphin-2.9-llama3-70b-IQ2_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [dolphin-2.9-llama3-70b-IQ2_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-70b-IQ2_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-70b-IQ2_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-70b-IQ1_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
| [dolphin-2.9-llama3-70b-IQ1_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/dolphin-2.9-llama3-70b-GGUF --include "dolphin-2.9-llama3-70b-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/dolphin-2.9-llama3-70b-GGUF --include "dolphin-2.9-llama3-70b-Q8_0.gguf/*" --local-dir dolphin-2.9-llama3-70b-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (dolphin-2.9-llama3-70b-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
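To make the sizing rule concrete, here is a tiny, purely illustrative helper that applies the "1-2GB of headroom" heuristic to the file sizes listed in the table above (not an official tool, just a sketch):
```python
def fits_in_vram(file_size_gb: float, vram_gb: float, headroom_gb: float = 1.5) -> bool:
    """Rough check: the quant file should be roughly 1-2GB smaller than available VRAM."""
    return file_size_gb + headroom_gb <= vram_gb

# File sizes taken from the quant table above
quants = {"Q4_K_M": 42.52, "IQ3_M": 31.93, "IQ2_XS": 21.14}
vram_gb = 24.0  # e.g. a single 24GB GPU
for name, size_gb in quants.items():
    print(name, "fits" if fits_in_vram(size_gb, vram_gb) else "needs CPU offload or more VRAM")
```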
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
Ayalexf/L3-RP-v3.3-Q8_0-GGUF | Ayalexf | 2024-06-25T15:45:21Z | 1,378 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Sao10K/L3-RP-v3.3",
"region:us"
] | null | 2024-06-25T15:44:41Z | ---
base_model: Sao10K/L3-RP-v3.3
tags:
- llama-cpp
- gguf-my-repo
---
# Ayalexf/L3-RP-v3.3-Q8_0-GGUF
This model was converted to GGUF format from [`Sao10K/L3-RP-v3.3`](https://huggingface.co/Sao10K/L3-RP-v3.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/L3-RP-v3.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ayalexf/L3-RP-v3.3-Q8_0-GGUF --hf-file l3-rp-v3.3-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ayalexf/L3-RP-v3.3-Q8_0-GGUF --hf-file l3-rp-v3.3-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ayalexf/L3-RP-v3.3-Q8_0-GGUF --hf-file l3-rp-v3.3-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ayalexf/L3-RP-v3.3-Q8_0-GGUF --hf-file l3-rp-v3.3-q8_0.gguf -c 2048
```
|
Klevin/EMO-Ai-7b-Q8_0-GGUF | Klevin | 2024-06-26T06:27:12Z | 1,378 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Klevin/EMO-Ai-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-26T06:26:41Z | ---
base_model: Klevin/EMO-Ai-7b
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# Klevin/EMO-Ai-7b-Q8_0-GGUF
This model was converted to GGUF format from [`Klevin/EMO-Ai-7b`](https://huggingface.co/Klevin/EMO-Ai-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Klevin/EMO-Ai-7b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Klevin/EMO-Ai-7b-Q8_0-GGUF --hf-file emo-ai-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Klevin/EMO-Ai-7b-Q8_0-GGUF --hf-file emo-ai-7b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Klevin/EMO-Ai-7b-Q8_0-GGUF --hf-file emo-ai-7b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Klevin/EMO-Ai-7b-Q8_0-GGUF --hf-file emo-ai-7b-q8_0.gguf -c 2048
```
|
Davlan/xlm-roberta-base-ner-hrl | Davlan | 2023-08-14T19:35:17Z | 1,377 | 16 | transformers | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
license: afl-3.0
---
---
language:
- ar
- de
- en
- es
- fr
- it
- lv
- nl
- pt
- zh
- multilingual
---
# xlm-roberta-base-ner-hrl
## Model description
**xlm-roberta-base-ner-hrl** is a **Named Entity Recognition** model for 10 high resourced languages (Arabic, German, English, Spanish, French, Italian, Latvian, Dutch, Portuguese and Chinese) based on a fine-tuned XLM-RoBERTa base model. It has been trained to recognize three types of entities: location (LOC), organizations (ORG), and person (PER).
Specifically, this model is an *xlm-roberta-base* model that was fine-tuned on an aggregation of NER datasets covering the 10 high-resourced languages listed below.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/xlm-roberta-base-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/xlm-roberta-base-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Nader Jokhadar had given Syria the lead with a well-struck header in the seventh minute."
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
The training data for the 10 languages are from:
Language|Dataset
-|-
Arabic | [ANERcorp](https://camel.abudhabi.nyu.edu/anercorp/)
German | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
English | [conll 2003](https://www.clips.uantwerpen.be/conll2003/ner/)
Spanish | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
French | [Europeana Newspapers](https://github.com/EuropeanaNewspapers/ner-corpora/tree/master/enp_FR.bnf.bio)
Italian | [Italian I-CAB](https://ontotext.fbk.eu/icab.html)
Latvian | [Latvian NER](https://github.com/LUMII-AILab/FullStack/tree/master/NamedEntities)
Dutch | [conll 2002](https://www.clips.uantwerpen.be/conll2002/ner/)
Portuguese |[Paramopama + Second Harem](https://github.com/davidsbatista/NER-datasets/tree/master/Portuguese)
Chinese | [MSRA](https://huggingface.co/datasets/msra_ner)
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-PER |Beginning of a person's name right after another person's name
I-PER |Person's name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on NVIDIA V100 GPU with recommended hyperparameters from HuggingFace code. |
KETI-AIR/ke-t5-base | KETI-AIR | 2023-09-18T01:24:23Z | 1,377 | 18 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"en",
"ko",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2022-03-02T23:29:04Z |
---
language:
- en
- ko
license: apache-2.0
tags:
- t5
eos_token: </s>
widget:
- text: 아버지가 방에 들어가신다.</s>
---
# Model Card for ke-t5-base
# Model Details
## Model Description
The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
> With T5, we propose reframing all NLP tasks into a unified text-to-text-format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
T5-Base is the checkpoint with 220 million parameters.
- **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
- **Shared by [Optional]:** Korea Electronics Technology Institute Artificial Intelligence Research Center
- **Model type:** Text Generation
- **Language(s) (NLP):**More information needed
- **License:** More information needed
- **Related Models:**
- **Parent Model:** T5
- **Resources for more information:**
- [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
- [KE-T5 Github Repo](https://github.com/AIRC-KETI/ke-t5)
- [Paper](https://aclanthology.org/2021.findings-emnlp.33/)
- [Associated Paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
- [Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
# Uses
## Direct Use
The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) that the model:
> Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself
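To make the framing concrete, every task is serialized into an input string and a target string; the pairs below are a hedged illustration (the prefixes follow the public T5 convention described in the paper and are only representative, not KE-T5-specific requirements):
```python
# Every task becomes "text in, text out"; only the task prefix changes.
examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    ("summarize: The quick brown fox jumped over the lazy dog again and again ...", "A fox kept jumping over a dog."),
    ("cola sentence: The course is jumping well.", "not acceptable"),
    ("stsb sentence1: The rhino grazed. sentence2: A rhino is grazing.", "4.8"),
]

for source, target in examples:
    print(f"input : {source}\ntarget: {target}\n")
```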
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
See the [t5-base model card](https://huggingface.co/t5-base?text=My+name+is+Wolfgang+and+I+live+in+Berlin) for further information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The developers evaluated the model on 24 tasks, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
### Factors
More information needed
### Metrics
More information needed
## Results
For full results for T5-Base, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Google Cloud TPU Pods
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
```bibtex
@inproceedings{kim-etal-2021-model-cross,
title = "A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems",
author = "Kim, San and
Jang, Jin Yea and
Jung, Minyoung and
Shin, Saim",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.33",
doi = "10.18653/v1/2021.findings-emnlp.33",
pages = "352--365",
abstract = "Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of the dialogue system has been improved recently by the method utilizing dialogue-related knowledge; however, non-English dialogue systems suffer from reproducing the performance of English dialogue systems because securing knowledge in the same language with the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper proves that the performance of a non-English dialogue system can be improved by utilizing English knowledge, highlighting the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained with Korean and English corpus, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed the performance improvement in the open-domain Korean dialogue model even only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open dialogue systems.",
}
```
```bibtex
@article{2020t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {Journal of Machine Learning Research},
year = {2020},
volume = {21},
number = {140},
pages = {1-67},
url = {http://jmlr.org/papers/v21/20-074.html}
}
```
**APA:**
- Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Korea Electronics Technology Institute Artificial Intelligence Research Center in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-base")
```
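A short follow-up sketch of running generation with the loaded checkpoint (this is a pre-trained model, so outputs only become meaningful after fine-tuning on a downstream task; the input text is purely illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("KETI-AIR/ke-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("KETI-AIR/ke-t5-base")

text = "summarize: KE-T5 is a Korean-English T5 model pre-trained on a bilingual corpus."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)  # meaningful only after fine-tuning
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```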
See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more examples.
</details>
|
sentence-transformers/bert-base-nli-cls-token | sentence-transformers | 2024-03-27T09:56:58Z | 1,377 | 2 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:1908.10084",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | sentence-similarity | 2022-03-02T23:29:05Z | ---
license: apache-2.0
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# bert-base-nli-cls-token
**โ ๏ธ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)**
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('sentence-transformers/bert-base-nli-cls-token')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/bert-base-nli-cls-token')
model = AutoModel.from_pretrained('sentence-transformers/bert-base-nli-cls-token')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
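Since this checkpoint is intended for sentence similarity, a natural follow-up is to compare the two embeddings; a minimal sketch using the `sentence-transformers` utilities (nothing here is specific to this model):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/bert-base-nli-cls-token')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```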
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/bert-base-nli-cls-token)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
This model was trained by [sentence-transformers](https://www.sbert.net/).
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "http://arxiv.org/abs/1908.10084",
}
``` |
Weyaxi/TekniumAiroboros-Nebula-7B | Weyaxi | 2023-11-25T17:39:52Z | 1,377 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-08T13:57:10Z |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__TekniumAiroboros-Nebula-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 52.82 |
| ARC (25-shot) | 57.17 |
| HellaSwag (10-shot) | 81.72 |
| MMLU (5-shot) | 55.25 |
| TruthfulQA (0-shot) | 51.64 |
| Winogrande (5-shot) | 73.24 |
| GSM8K (5-shot) | 9.4 |
| DROP (3-shot) | 41.33 |
|
TheBloke/deepseek-coder-1.3b-base-GGUF | TheBloke | 2023-11-05T15:22:09Z | 1,377 | 5 | transformers | [
"transformers",
"gguf",
"deepseek",
"base_model:deepseek-ai/deepseek-coder-1.3b-base",
"license:other",
"region:us"
] | null | 2023-11-05T13:28:51Z | ---
base_model: deepseek-ai/deepseek-coder-1.3b-base
inference: false
license: other
license_link: LICENSE
license_name: deepseek-license
model_creator: DeepSeek
model_name: Deepseek Coder 1.3B Base
model_type: deepseek
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Deepseek Coder 1.3B Base - GGUF
- Model creator: [DeepSeek](https://huggingface.co/deepseek-ai)
- Original model: [Deepseek Coder 1.3B Base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base)
<!-- description start -->
## Description
This repo contains GGUF format model files for [DeepSeek's Deepseek Coder 1.3B Base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF)
* [DeepSeek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [deepseek-coder-1.3b-base.Q2_K.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q2_K.gguf) | Q2_K | 2 | 0.63 GB| 3.13 GB | smallest, significant quality loss - not recommended for most purposes |
| [deepseek-coder-1.3b-base.Q3_K_S.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q3_K_S.gguf) | Q3_K_S | 3 | 0.66 GB| 3.16 GB | very small, high quality loss |
| [deepseek-coder-1.3b-base.Q3_K_M.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q3_K_M.gguf) | Q3_K_M | 3 | 0.70 GB| 3.20 GB | very small, high quality loss |
| [deepseek-coder-1.3b-base.Q3_K_L.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q3_K_L.gguf) | Q3_K_L | 3 | 0.74 GB| 3.24 GB | small, substantial quality loss |
| [deepseek-coder-1.3b-base.Q4_0.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q4_0.gguf) | Q4_0 | 4 | 0.78 GB| 3.28 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [deepseek-coder-1.3b-base.Q4_K_S.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q4_K_S.gguf) | Q4_K_S | 4 | 0.81 GB| 3.31 GB | small, greater quality loss |
| [deepseek-coder-1.3b-base.Q4_K_M.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q4_K_M.gguf) | Q4_K_M | 4 | 0.87 GB| 3.37 GB | medium, balanced quality - recommended |
| [deepseek-coder-1.3b-base.Q5_0.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q5_0.gguf) | Q5_0 | 5 | 0.94 GB| 3.44 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [deepseek-coder-1.3b-base.Q5_K_S.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q5_K_S.gguf) | Q5_K_S | 5 | 0.95 GB| 3.45 GB | large, low quality loss - recommended |
| [deepseek-coder-1.3b-base.Q5_K_M.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q5_K_M.gguf) | Q5_K_M | 5 | 1.00 GB| 3.50 GB | large, very low quality loss - recommended |
| [deepseek-coder-1.3b-base.Q6_K.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q6_K.gguf) | Q6_K | 6 | 1.17 GB| 3.67 GB | very large, extremely low quality loss |
| [deepseek-coder-1.3b-base.Q8_0.gguf](https://huggingface.co/TheBloke/deepseek-coder-1.3b-base-GGUF/blob/main/deepseek-coder-1.3b-base.Q8_0.gguf) | Q8_0 | 8 | 1.43 GB| 3.93 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/deepseek-coder-1.3b-base-GGUF and below it, a specific filename to download, such as: deepseek-coder-1.3b-base.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/deepseek-coder-1.3b-base-GGUF deepseek-coder-1.3b-base.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/deepseek-coder-1.3b-base-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/deepseek-coder-1.3b-base-GGUF deepseek-coder-1.3b-base.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m deepseek-coder-1.3b-base.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/deepseek-coder-1.3b-base-GGUF", model_file="deepseek-coder-1.3b-base.Q4_K_M.gguf", model_type="deepseek", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
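Beyond those guides, a minimal sketch of loading the GGUF file downloaded above through LangChain's `LlamaCpp` wrapper might look like this (illustrative only; it assumes `langchain-community` and `llama-cpp-python` are installed and that the file path matches your download):

```python
from langchain_community.llms import LlamaCpp

# Point LlamaCpp at the locally downloaded GGUF file.
llm = LlamaCpp(
    model_path="./deepseek-coder-1.3b-base.Q4_K_M.gguf",
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
    n_ctx=2048,        # context length, analogous to -c above
    temperature=0.7,
)

print(llm.invoke("#write a quick sort algorithm\n"))
```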
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้ฟๆ, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: DeepSeek's Deepseek Coder 1.3B Base
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[๐ Homepage]</a> | <a href="https://coder.deepseek.com/">[๐ค Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(ๅพฎไฟก)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-1.3b-base is a 1.3B parameter model with Multi-Head Attention trained on 1 trillion tokens.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### 1) Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = "#write a quick sort algorithm"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
#### 2) Code Insertion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = """<๏ฝfimโbegin๏ฝ>def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[0]
left = []
right = []
<｜fim▁hole｜>
if arr[i] < pivot:
left.append(arr[i])
else:
right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<｜fim▁end｜>"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])
```
#### 3) Repository Level Code Completion
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base", trust_remote_code=True).cuda()
input_text = """#utils.py
import torch
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
def load_data():
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Standardize the data
scaler = StandardScaler()
X = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Convert numpy data to PyTorch tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.int64)
y_test = torch.tensor(y_test, dtype=torch.int64)
return X_train, X_test, y_train, y_test
def evaluate_predictions(y_test, y_pred):
return accuracy_score(y_test, y_pred)
#model.py
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
class IrisClassifier(nn.Module):
def __init__(self):
super(IrisClassifier, self).__init__()
self.fc = nn.Sequential(
nn.Linear(4, 16),
nn.ReLU(),
nn.Linear(16, 3)
)
def forward(self, x):
return self.fc(x)
def train_model(self, X_train, y_train, epochs, lr, batch_size):
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(self.parameters(), lr=lr)
# Create DataLoader for batches
dataset = TensorDataset(X_train, y_train)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
for epoch in range(epochs):
for batch_X, batch_y in dataloader:
optimizer.zero_grad()
outputs = self(batch_X)
loss = criterion(outputs, batch_y)
loss.backward()
optimizer.step()
def predict(self, X_test):
with torch.no_grad():
outputs = self(X_test)
_, predicted = outputs.max(1)
return predicted.numpy()
#main.py
from utils import load_data, evaluate_predictions
from model import IrisClassifier as Classifier
def main():
# Model training and evaluation
"""
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=140)
print(tokenizer.decode(outputs[0]))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
<!-- original-model-card end -->
|
LTC-AI-Labs/L2-7b-Hermes-Synthia | LTC-AI-Labs | 2024-04-12T01:50:56Z | 1,377 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"roleplay",
"conversational",
"en",
"dataset:migtissera/Synthia-v1.3",
"dataset:open-llm-leaderboard/details_NousResearch__Nous-Hermes-llama-2-7b",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-23T15:17:39Z | ---
language:
- en
license: llama2
tags:
- roleplay
- conversational
datasets:
- migtissera/Synthia-v1.3
- open-llm-leaderboard/details_NousResearch__Nous-Hermes-llama-2-7b
pipeline_tag: text-generation
model-index:
- name: L2-7b-Hermes-Synthia
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LTC-AI-Labs/L2-7b-Hermes-Synthia
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LTC-AI-Labs/L2-7b-Hermes-Synthia
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LTC-AI-Labs/L2-7b-Hermes-Synthia
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 46.77
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LTC-AI-Labs/L2-7b-Hermes-Synthia
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LTC-AI-Labs/L2-7b-Hermes-Synthia
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=LTC-AI-Labs/L2-7b-Hermes-Synthia
name: Open LLM Leaderboard
---
Fine-tuned the Hermes 2 7B model on the Synthia dataset.
In my opinion it's probably the best model I've fine-tuned in terms of role-playing (tested on LavernAI).
Future plans:
- I'll probably do more tests in other areas
- Will add other languages (potentially Japanese and Chinese)
- Fine-tune it on Mistral models?
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_LTC-AI-Labs__L2-7b-Hermes-Synthia)
| Metric |Value|
|---------------------------------|----:|
|Avg. |52.21|
|AI2 Reasoning Challenge (25-Shot)|51.02|
|HellaSwag (10-Shot) |79.12|
|MMLU (5-Shot) |47.88|
|TruthfulQA (0-shot) |46.77|
|Winogrande (5-shot) |74.51|
|GSM8k (5-shot) |13.95|
|
Korabbit/Llama-2-7b-chat-hf-afr-200step-flan | Korabbit | 2023-11-30T12:37:15Z | 1,377 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:sordonia/flan-10k-flat",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-30T11:40:24Z | ---
license: llama2
datasets:
- sordonia/flan-10k-flat
language:
- en
---
Hi there! This is a test of my approach, "AFR training".
The base model used is "Llama-2-7b-chat".
System Prompt: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. \n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
Instruction: Implement binary search in python.
Response: Sure! Here is an example of how to implement a binary search algorithm in Python:
```
def binary_search(arr, target):
    # Initialize the search boundaries
    low = 0
    high = len(arr) - 1
    # Loop until the target element is found or the search range is empty
    while low <= high:
        # Calculate the middle index of the array
        mid = (low + high) // 2
        # Check if the target element is at the middle index
        if arr[mid] == target:
            return mid
        # If the target element is not at the middle index, narrow the search range
        if arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    # If the target element is not found in the array, return -1
    return -1

# Test the binary search function
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
result = binary_search(arr, target)
print(f"The target element {target} is at index {result}")
```
This code defines a `binary_search` function that takes an array and a target element as input, and returns the index of the target element in the array if it is found, or -1 if it is not found.
The function works by iteratively dividing the range of the array into two sub-ranges, one containing elements less than the target, and one containing elements greater than the target. It then checks the middle index of each sub-range to see if the target element is present. If the target element is found, the function returns the index of the middle index. If the target element is not found, the function returns -1.
The code also includes a test case to demonstrate how to use the `binary_search` function. In this case, the target element is 5, and the function returns the index of the element 5 in the array.
I hope this helps! Let me know if you have any questions. |
maywell/PiVoT-SOLAR-10.7B-RP | maywell | 2023-12-16T12:09:02Z | 1,377 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-16T11:21:05Z | ---
license: cc-by-sa-4.0
language:
- en
pipeline_tag: text-generation
---
# PiVoT-SOLAR-10.7B-RP

# **Model Details**
### Description
PiVoT is an RP fine-tuned model based on SOLAR-10.7B-Instruct. A private RP dataset collected from ArcaLive AI Chat Chan was used.
This model has a sequence length of **8192**.
Follow me on twitter: https://twitter.com/stablefluffy
Consider supporting me (I make these models alone): https://www.buymeacoffee.com/mwell, or with a RunPod credit gift.
Contact me on Telegram: https://t.me/AlzarTakkarsen |
cookinai/BruinHermes | cookinai | 2024-01-03T21:20:15Z | 1,377 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-17T17:03:22Z | ---
license: unknown
tags:
- merge
---
Slerp Merge of rwitz2/go-bruins-v2.1.1 and Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
.yaml file for mergekit
```.yaml:
slices:
- sources:
- model: rwitz2/go-bruins-v2.1.1
layer_range: [0, 32]
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
layer_range: [0, 32]
merge_method: slerp
base_model: rwitz2/go-bruins-v2.1.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
``` |
mlabonne/GML-Mistral-merged-v1 | mlabonne | 2024-03-04T15:14:25Z | 1,377 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-27T15:38:06Z | ---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: GML-Mistral-merged-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 43.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/GML-Mistral-merged-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 57.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/GML-Mistral-merged-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/GML-Mistral-merged-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/GML-Mistral-merged-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.88
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/GML-Mistral-merged-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/GML-Mistral-merged-v1
name: Open LLM Leaderboard
---
# GML-Mistral-merged-v1
Quick attempt to reproduce [zyh3826/GML-Mistral-merged-v1](https://huggingface.co/zyh3826/GML-Mistral-merged-v1).
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [quantumaikr/quantum-v0.01](https://huggingface.co/quantumaikr/quantum-v0.01)
* [mncai/mistral-7b-dpo-v5](https://huggingface.co/mncai/mistral-7b-dpo-v5)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: quantumaikr/quantum-v0.01
layer_range: [0, 32]
- sources:
- model: mncai/mistral-7b-dpo-v5
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__GML-Mistral-merged-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |48.54|
|AI2 Reasoning Challenge (25-Shot)|43.77|
|HellaSwag (10-Shot) |57.89|
|MMLU (5-Shot) |64.13|
|TruthfulQA (0-shot) |51.58|
|Winogrande (5-shot) |73.88|
|GSM8k (5-shot) | 0.00|
|
Azazelle/Half-NSFW_Noromaid-7b | Azazelle | 2023-12-29T19:15:01Z | 1,377 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-29T19:02:44Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Half-NSFW_Noromaid-7b
<!-- Provide a quick summary of what the model is/does. -->
Slerp merge of Noromaid-7b-v0.2 and NSFW_DPO_Noromaid-7b.
.yaml file for mergekit
```.yaml:
slices:
- sources:
- model: NeverSleep/Noromaid-7b-v0.2
layer_range: [0, 32]
- model: athirdpath/NSFW_DPO_Noromaid-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0.07, 0.53, 0.35, 0.72, 1]
- filter: mlp
value: [0.93, 0.47, 0.65, 0.28, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
DopeorNope/SOLARC-MOE-10.7Bx6 | DopeorNope | 2024-02-01T14:26:18Z | 1,377 | 16 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"MoE",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-30T08:02:47Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- moe
- merge
- MoE
---
**The license is `cc-by-nc-sa-4.0`.**
# **SOLARC-MOE-10.7Bx6**

## Model Details
**Model Developers** Seungyoo Lee(DopeorNope)
I am in charge of Large Language Models (LLMs) at Markr AI team in South Korea.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
SOLARC-MOE-10.7Bx6 is an auto-regressive language model based on the SOLAR architecture.
---
## **Base Model**
[kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
[Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)
[VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct)
[fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0)
[jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)
## **Implemented Method**
I have built a model using the Mixture of Experts (MOE) approach, utilizing each of these models as the base.
I wanted to test whether it was possible to build the MoE with a number of experts that is not a power of two, such as 6.
---
# Implementation Code
## Load model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "DopeorNope/SOLARC-MOE-10.7Bx6"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float32,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
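## Inference example

The snippet below is a hedged illustration of generating text with the objects loaded above; the prompt format and sampling settings are assumptions, not the author's documented usage:

```python
# Generate a short completion with the model and tokenizer loaded above.
prompt = "### User:\nExplain the Mixture of Experts idea in two sentences.\n\n### Assistant:\n"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```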
--- |
occultml/Helios-10.7B | occultml | 2024-04-01T16:01:48Z | 1,377 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-31T15:14:38Z | ---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: Helios-10.7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 38.91
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=occultml/Helios-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 46.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=occultml/Helios-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 41.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=occultml/Helios-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 55.52
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=occultml/Helios-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=occultml/Helios-10.7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=occultml/Helios-10.7B
name: Open LLM Leaderboard
---
# Helios-10.7B
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [jeonsworld/CarbonVillain-en-10.7B-v4](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v4)
* [kekmodel/StopCarbon-10.7B-v5](https://huggingface.co/kekmodel/StopCarbon-10.7B-v5)
## ๐งฉ Configuration
```yaml
slices:
- sources:
- model: jeonsworld/CarbonVillain-en-10.7B-v4
layer_range: [0, 32]
- model: kekmodel/StopCarbon-10.7B-v5
layer_range: [0, 32]
merge_method: slerp
base_model: jeonsworld/CarbonVillain-en-10.7B-v4
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_occultml__Helios-10.7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |42.19|
|AI2 Reasoning Challenge (25-Shot)|38.91|
|HellaSwag (10-Shot) |46.60|
|MMLU (5-Shot) |41.40|
|TruthfulQA (0-shot) |55.52|
|Winogrande (5-shot) |70.72|
|GSM8k (5-shot) | 0.00|
|
TomGrc/FusionNet_34Bx2_MoE | TomGrc | 2024-03-04T20:52:36Z | 1,377 | 8 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"conversational",
"en",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-21T04:19:27Z | ---
language:
- en
license: mit
tags:
- moe
pipeline_tag: text-generation
model-index:
- name: FusionNet_34Bx2_MoE
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.95
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.05
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.31
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_34Bx2_MoE
name: Open LLM Leaderboard
---
# FusionNet_34Bx2_MoE
A fine-tuned English-language model built using the MoE method.
## Model description
FusionNet_34Bx2_MoE is an experiment with the MoE method, which can significantly increase performance over the original models. It has 60.8B parameters and has been fine-tuned. Enjoy!
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
model = AutoModelForCausalLM.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
```
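As an illustrative continuation (not part of the original card), a simple generation call with the tokenizer and model loaded above could look like this; note that a 60.8B-parameter model requires substantial memory, and the prompt and generation settings here are assumptions:

```python
# Generate a short completion with the model loaded above.
inputs = tokenizer("The key idea behind a Mixture of Experts model is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```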
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FusionNet_34Bx2_MoE)
| Metric |Value|
|---------------------------------|----:|
|Avg. |77.07|
|AI2 Reasoning Challenge (25-Shot)|72.95|
|HellaSwag (10-Shot) |86.22|
|MMLU (5-Shot) |77.05|
|TruthfulQA (0-shot) |71.31|
|Winogrande (5-shot) |83.98|
|GSM8k (5-shot) |70.89|
|
microsoft/SportsBERT | microsoft | 2022-12-10T18:18:40Z | 1,376 | 15 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-03-02T23:29:05Z | Pretraining large natural language processing models such as BERT, RoBERTa, etc are now state of the art models in natural language understanding and processing tasks. However, these models are trained on a general corpus of articles from the web or from repositories like quora, wikipedia, etc which contain articles of all domains and backgrounds. Training domain specific language model has proven to perform better than pretrained general models in domains like Medicine. With that knowledge, we went on to train a sports specific BERT based transformers model, SportsBERT.
SportsBERT is a BERT model trained from scratch with specific focus on sports articles. The training corpus included news articles scraped from the web related to sports from the past 4 years. These articles covered news from Football, Basketball, Hockey, Cricket, Soccer, Baseball, Olympics, Tennis, Golf, MMA, etc. There were approximately 8 million training samples which were used to train this model. A tokenizer was trained from scratch to include more sports related tokens to the vocabulary. The architecture used in this model is the BERT base uncased architecture. The model was trained on four V100 GPUs. It's a MLM based transformers model and the primary task of the model is to fill in missing masked tokens. For example,
"Anthony Davis is a [MASK]" would give out the tokens "legend", "superstar", "rookie", "star", "king" in descending confidences.
This model can then be fine-tuned for other tasks such as classification, entity extraction, etc.
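A minimal fill-mask sketch using the standard `transformers` pipeline API (illustrative only; the pipeline call and example sentence are not from the original card):

```python
from transformers import pipeline

# Load SportsBERT as a fill-mask pipeline and score candidate tokens for the masked position.
fill_mask = pipeline("fill-mask", model="microsoft/SportsBERT")

for prediction in fill_mask("Anthony Davis is a [MASK]"):
    print(prediction["token_str"], round(prediction["score"], 3))
```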
Language: English
pipeline_tag: fill-mask
Authors: Prithvishankar Srinivasan ([email protected]) |
Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B | Weyaxi | 2023-12-01T12:16:02Z | 1,376 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-01T12:05:48Z | ---
license: apache-2.0
---
Merge of [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) and [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) using ties merge.
### *Weights*
- [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1): 0.5
- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.3
### *Density*
- [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1): 0.5
- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5 |
Undi95/Clover3-17B | Undi95 | 2023-12-11T22:42:48Z | 1,376 | 10 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"not-for-all-audiences",
"nsfw",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-11T19:17:30Z | ---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---

Clover3 is a model made primarily for ERP (Erotic Roleplay). It uses a base frankenmerge of 3x Mistral-7B, fine-tuned on top with an RP dataset to avoid garbage replies.
This bot is EROTIC most of the time. Be warned!
<!-- description start -->
## Description
This repo contains fp16 files of Clover3-17B.
<!-- description end -->
<!-- description start -->
## Models used
- NurtureAI/neural-chat-7b-v3-16k
- NeverSleep/Noromaid-7b-v0.1.1/
- lemonilia/AshhLimaRP-Mistral-7B
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting me use some of his work!
If you want to support me, you can [here](https://ko-fi.com/undiai).
|
ignos/LeoScorpius-GreenNode-Platypus-7B-v1 | ignos | 2023-12-15T10:49:57Z | 1,376 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-15T09:54:24Z | ---
license: apache-2.0
---
# Model Card for Model ID
This model is a finetuning of other models based on mistralai/Mistral-7B-v0.1.
## Model Details
### Model Description
The model was generated by merging the models [viethq188/LeoScorpius-7B-Chat-DPO](https://huggingface.co/viethq188/LeoScorpius-7B-Chat-DPO) and [GreenNode/GreenNodeLM-7B-v1olet](https://huggingface.co/GreenNode/GreenNodeLM-7B-v1olet), followed by fine-tuning on the Platypus dataset [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
- **Developed by:** Ignos
- **Model type:** Mistral
- **License:** Apache-2.0
## Uses
The model aims to have good overall comparative results on HuggingFace metrics, improving reasoning.
## Bias, Risks, and Limitations
The same biases, risks, and limitations as the base models.
## Training Details
### Training Data
- [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
### Training Procedure
- Training with QLoRA approach and merging with base model.
### Results
- Huggingface evaluation pending
#### Summary
## Technical Specifications
### Model Architecture and Objective
- Models based on Mistral Architecture
### Compute Infrastructure
- Training on RunPod
#### Hardware
- 4 x Nvidia RTX 4090
- 64 vCPU 503 GB RAM
#### Software
- Mergekit (main)
- Axolotl 0.3.0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
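The settings above can be expressed as a `transformers` `BitsAndBytesConfig`. The sketch below is a hedged reconstruction for illustration only; the authors' actual training script is not published, and the model id used for loading is simply this repository:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# mirroring the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ignos/LeoScorpius-GreenNode-Platypus-7B-v1",
    quantization_config=bnb_config,
    device_map="auto",
)
```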
### Framework versions
- PEFT 0.6.0
|
Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K | Brillibits | 2023-12-19T03:34:01Z | 1,376 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:databricks/databricks-dolly-15k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-19T02:38:41Z | ---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
pipeline_tag: text-generation
---
# Instruct_Mixtral-8x7B-v0.1_Dolly15K
Fine-tuned from Mixtral-8x7B-v0.1, using Dolly15k for the dataset: 85% for training, 14.9% for validation, 0.1% for test. Trained for 1.0 epoch using QLoRA, with a 1024-token context window.
# Model Details
* **Trained by**: [Brillibits](https://www.youtube.com/@Brillibits).
* **Model type:** **Instruct_Mixtral-8x7B-v0.1_Dolly15K** is an auto-regressive language model based on the Mixtral transformer architecture.
* **Language(s)**: English
* **License for Instruct_Mixtral-8x7B-v0.1_Dolly15K**: apache-2.0 license
# Prompting
## Prompt Template With Context
```
Write a 10-line poem about a given topic
Input:
The topic is about racecars
Output:
```
## Prompt Template Without Context
```
Who was the second president of the United States?
Output:
```
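The templates above can be used directly as plain prompts. A hedged usage sketch with `transformers` (illustrative only; loading an 8x7B model this way needs substantial GPU memory, and the generation settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format the prompt exactly like the "without context" template shown above.
prompt = "Who was the second president of the United States?\nOutput:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```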
## Professional Assistance
This model and other models like it are great, but where LLMs hold the most promise is when they are applied on custom data to automate a wide variety of tasks
If you have a dataset and want to see if you might be able to apply that data to automate some tasks, and you are looking for professional assistance, contact me [here](mailto:[email protected]) |
l3utterfly/minima-3b-layla-v2 | l3utterfly | 2023-12-19T07:49:06Z | 1,376 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-19T04:43:58Z | ---
license: llama2
language:
- en
---
# Model Card
### Model Description
[MiniMA-3B](https://huggingface.co/GeneZC/MiniMA-3B) (from GeneZC) fine-tuned on:
1. Teatime Roleplay dataset for text completion
2. ShareGPT datasets for multi-turn conversations.
- **Developed by:** l3utterfly
- **Funded by:** Layla Network
- **Model type:** Llama2
- **Language(s) (NLP):** English
- **License:** Llama2
- **Finetuned from model:** MiniMA-3B
## Uses
Base model used by Layla - the offline personal assistant: https://www.layla-network.ai
Help & support: https://discord.gg/x546YJ6nYC
Prompt:
```
USER:
ASSISTANT:
```
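A hedged sketch of applying the `USER:`/`ASSISTANT:` format above with `transformers` (illustrative only; the example question and generation settings are assumptions, not from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "l3utterfly/minima-3b-layla-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt in the documented USER:/ASSISTANT: format.
prompt = "USER: What can you help me with?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```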
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
vikash06/llama-2-7b-small-model-new | vikash06 | 2024-03-04T15:35:47Z | 1,376 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-22T04:57:34Z | ---
license: mit
model-index:
- name: llama-2-7b-small-model-new
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 45.22
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/llama-2-7b-small-model-new
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 72.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/llama-2-7b-small-model-new
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 46.23
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/llama-2-7b-small-model-new
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 42.46
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/llama-2-7b-small-model-new
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/llama-2-7b-small-model-new
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 9.55
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vikash06/llama-2-7b-small-model-new
name: Open LLM Leaderboard
---
This model was trained on an experimental basis on a small dataset to assess whether training longer on a smaller dataset yields good performance.
# Model Details
vikash06/llama-2-7b-small-model --> a fine-tuned model based on Llama 2
# Uses
# Creative Writing: Write a question or instruction that requires a creative, open-ended written response.
The instruction should be reasonable to ask of a person with general world knowledge and should not require searching.
In this task, your prompt should give very specific instructions to follow.
Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
# Closed QA: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia.
The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge.
To create a question for this task include both the text of the question as well as the reference text in the form.
# Open QA: Write a question that can be answered using general world knowledge or at most a single search.
This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
# Summarization: Give a summary of a paragraph from Wikipedia.
Please don't ask questions that will require more than 3-5 minutes to answer.
To create a question for this task include both the text of the question as well as the reference text in the form.
# Information Extraction: These questions involve reading a paragraph from Wikipedia and extracting information from the passage.
Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages.
To create a question for this task include both the text of the question as well as the reference text in the form.
# Classification: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc.
In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.).
You can choose any categories for classification you like, the more diverse the better.
# Brainstorming: Think up lots of examples in response to a question asking to brainstorm ideas
# Direct Use
The model is intended for direct use.
# How to Get Started with the Model
```python
import torch
import pandas as pd
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vikash06/llama-2-7b-small-model")
model = AutoModelForCausalLM.from_pretrained("vikash06/llama-2-7b-small-model", torch_dtype=torch.float16, device_map="cuda:0")

print(model)

def generate_training_prompt(instruction, context):
    return f"""
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {instruction}

### Context:
{context.strip()}
""".strip()

data1 = {"instruction": "When was the first Reading railway station opened?", "context": "Reading railway station is a major transport hub in Reading, Berkshire, England. It is on the northern edge of the town centre, near the main retail and commercial areas and the River Thames, 36 miles (58 km) from London Paddington. The first Reading station was opened on 30 March 1840 as the temporary western terminus of the original line of the Great Western Railway (GWR). Reading is the ninth-busiest station in the UK outside London and the second busiest interchange station outside London with over 3.8 million passengers changing trains at the station annually.", "response": "The first Reading railway station was opened on the 30th of March, 1840.", "category": "closed_qa"}

prompt = generate_training_prompt(data1["instruction"], data1["context"])

input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda(0)
outputs = model.generate(input_ids=input_ids, max_new_tokens=128, do_sample=True, top_p=0.9, temperature=0.3)
resp = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):].split("\n")
resp = [x for x in resp if x != '']
print(resp)
```
# Training Data
1000 samples were carefully selected from each category.
# Training Procedure
We used the below libraries to fine-tune llama2-7b:

- torch==2.1.0
- transformers==4.35.2
- peft @ git+https://github.com/huggingface/peft.git
- bitsandbytes==0.41.1
- trl @ git+https://github.com/lvwerra/trl.git@34e6948d459540a21f80c5be227fb4da039dd97a

We used a batch size of 2 for 50 epochs.
# Evaluation
We performed the HellaSwag task using EleutherAI's evaluation library: https://github.com/EleutherAI/lm-evaluation-harness
Below are the results:


# Environmental Impact
Carbon emitted: 0.432 kg/kWh; offset: 0%; hardware: 3x A6000 48GB; hours: 28
# Technical Report
Detailed writeup coming soon.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vikash06__llama-2-7b-small-model-new)
| Metric |Value|
|---------------------------------|----:|
|Avg. |46.62|
|AI2 Reasoning Challenge (25-Shot)|45.22|
|HellaSwag (10-Shot) |72.35|
|MMLU (5-Shot) |46.23|
|TruthfulQA (0-shot) |42.46|
|Winogrande (5-shot) |63.93|
|GSM8k (5-shot) | 9.55|
|
Azazelle/Silicon-Medley | Azazelle | 2023-12-29T17:51:40Z | 1,376 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-29T17:13:51Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Silicon-Medley
<!-- Provide a quick summary of what the model is/does. -->
Slerp merge of Silicon-Maid-7B and piano-medley-7b.
.yaml file for mergekit
```.yaml:
slices:
- sources:
- model: SanjiWatsuki/Silicon-Maid-7B
layer_range: [0, 32]
- model: chargoddard/piano-medley-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0.22, 0.61, 0.46, 0.77, 1]
- filter: mlp
value: [0.78, 0.39, 0.54, 0.23, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
abacusai/Slerp-CM-mist-dpo | abacusai | 2024-01-17T14:13:18Z | 1,376 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-03T21:56:48Z | ---
license: apache-2.0
tags:
- merge
---

This model is a [Slerp Merge](https://github.com/cg123/mergekit/blob/main/mergekit/merge_methods/slerp.py) of [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp) and [mncai/mistral-7b-dpo-v5](https://huggingface.co/mncai/mistral-7b-dpo-v5).
# Evaluation Results
### HuggingFace Leaderboard
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
| 73.1 | 69.62 | 87.09 | 64.81 | 62.82 | 81.45 | 72.78 |
The model did achieve an improvement in TruthfulQA over `cookinai/CatMacaroni-Slerp` and in GSM8K over `mncai/mistral-7b-dpo-v5`,
which was the goal of the merge, leading to an average score better than both. It is unclear why the TruthfulQA metric
is still meaningfully lower than the base `mncai/mistral-7b-dpo-v5`.
# Training Details
.yaml file for mergekit
```yaml
slices:
- sources:
- model: cookinai/CatMacaroni-Slerp
layer_range: [0, 32]
- model: mncai/mistral-7b-dpo-v5
layer_range: [0, 32]
merge_method: slerp
base_model: mncai/mistral-7b-dpo-v5
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
# Bias, Risks, and Limitations
The model has not been evaluated for safety and is only intended for research and experiments. |
yaojialzc/Gigi-Llama3-8B-Chinese-zh | yaojialzc | 2024-04-24T02:23:11Z | 1,376 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"latest",
"conversational",
"en",
"zh",
"dataset:teknium/OpenHermes-2.5",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-20T14:27:25Z | ---
license: apache-2.0
language:
- en
- zh
datasets:
- teknium/OpenHermes-2.5
pipeline_tag: text-generation
tags:
- llama
- latest
library_name: transformers
---

Gigi is a fine-tune of the state-of-the-art Llama-3-8B-Instruct on more than 1.3 million carefully screened, high-quality Chinese-English bilingual samples. It can better handle various downstream tasks and provide you with high-quality Chinese-English bilingual results. We incorporated high-quality fine-tuning data, such as Hermes and glaive-function-calling instructions, into the training, as well as a large amount of GPT-4 data translated using GPT-3.5. Gigi can meet your needs well in Chinese-English bilingual contexts.
Training data sources:
- **English**: [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5), containing more than 1 million GPT-4-generated fine-tuning samples
- **Chinese**: more than 200K samples, combining several high-quality Chinese SFT datasets with GPT-4-generated data translated and corrected from English
# Gigi-Llama-3-8B-zh
Gigi-Llama-3-8B-zh is the first model in the Gigi series, trained on the Hermes, glaive-function-calling, and refgpt_fact_v2 datasets, plus some Chinese data translated using GPT-3.5. It also improves the model's behavior in both Chinese and English, and further strengthens its Chinese capabilities by incorporating datasets such as COIG-CQIA and alpaca-gpt4-data-zh.
# How to use
Gigi-Llama-3-8B-zh follows the dialogue template of Llama-3-8B-Instruct, using `<|end_of_text|>` as the pad token.
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_msg_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|>
```
You can use the following code to load the model for inference. For more efficient inference, vLLM is recommended. We will describe the model's performance in more detail later, and will soon release a fine-tuned version with more parameters and better performance.
```python
import torch
from transformers import PreTrainedTokenizerFast, AutoModelForCausalLM
from peft import PeftModel
from torch.nn.functional import softmax
device = "cuda"
model_id = "yaojialzc/Gigi-Llama-3-8B-zh"
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
device_map="auto",
torch_dtype=torch.bfloat16)
messages = [
{"role": "system", "content": "ไฝ ๆฏไธไธชAIๅฉๆใ"},
{"role": "user", "content": "ๆๆๆๅไธไฝ็ๅธๆฏ่ฐ๏ผๅ็ญไป็ๅๅญ๏ผ็ถๅๅๆญข่พๅบ"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(device)
output = model.generate(
input_ids,
do_sample=True,
temperature=0.01,
top_k=50,
top_p=0.7,
repetition_penalty=1,
max_length=128,
pad_token_id=tokenizer.eos_token_id,
)
output = tokenizer.decode(output[0], skip_special_tokens=False)
print(output)
```
Llama 3's output does not stop at the EOT token, so the model cannot be used out of the box. For now we respect the official behavior and, during fine-tuning, guide the model to output `end_of_text` directly at the end, which makes it convenient to use immediately and to fine-tune further for downstream domains.
giannisan/mistral-imagebind-gguf | giannisan | 2024-05-27T02:09:19Z | 1,376 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-05-27T02:04:05Z | Entry not found |
advanced-stack/NuExtract-GGUF | advanced-stack | 2024-06-27T12:49:59Z | 1,376 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | 2024-06-24T21:09:59Z | ---
license: mit
---
Tutorial (5 min read): [Use NuExtract to parse unstructured text locally in less than 5 min](https://advanced-stack.com/resources/how-to-parse-unstructured-text-with-nuextract.html)
Original model card: https://huggingface.co/numind/NuExtract |
ndsolo/llama3-8b-cosmic-fusion-dynamics-f16-gguf | ndsolo | 2024-07-01T03:46:05Z | 1,376 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T03:34:43Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** ndsolo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Helsinki-NLP/opus-mt-tc-big-en-fr | Helsinki-NLP | 2023-10-10T10:25:37Z | 1,375 | 4 | transformers | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"en",
"fr",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2022-04-13T14:07:14Z | ---
language:
- en
- fr
tags:
- translation
- opus-mt-tc
license: cc-by-4.0
model-index:
- name: opus-mt-tc-big-en-fr
results:
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: flores101-devtest
type: flores_101
args: eng fra devtest
metrics:
- name: BLEU
type: bleu
value: 52.2
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: multi30k_test_2016_flickr
type: multi30k-2016_flickr
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 52.4
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: multi30k_test_2017_flickr
type: multi30k-2017_flickr
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 52.8
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: multi30k_test_2017_mscoco
type: multi30k-2017_mscoco
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 54.7
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: multi30k_test_2018_flickr
type: multi30k-2018_flickr
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 43.7
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: news-test2008
type: news-test2008
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 27.6
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newsdiscussdev2015
type: newsdiscussdev2015
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 33.4
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newsdiscusstest2015
type: newsdiscusstest2015
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 40.3
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: tatoeba-test-v2021-08-07
type: tatoeba_mt
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 53.2
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: tico19-test
type: tico19-test
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 40.6
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newstest2009
type: wmt-2009-news
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 30.0
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newstest2010
type: wmt-2010-news
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 33.5
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newstest2011
type: wmt-2011-news
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 35.0
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newstest2012
type: wmt-2012-news
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 32.8
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newstest2013
type: wmt-2013-news
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 34.6
- task:
name: Translation eng-fra
type: translation
args: eng-fra
dataset:
name: newstest2014
type: wmt-2014-news
args: eng-fra
metrics:
- name: BLEU
type: bleu
value: 41.9
---
# opus-mt-tc-big-en-fr
Neural machine translation model for translating from English (en) to French (fr).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT โ Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge โ Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): eng
* target language(s): fra
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fra/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT eng-fra README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-fra/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"The Portuguese teacher is very demanding.",
"When was your last hearing test?"
]
model_name = "pytorch-models/opus-mt-tc-big-en-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Le professeur de portugais est trรจs exigeant.
# Quand a eu lieu votre dernier test auditif ?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-fr")
print(pipe("The Portuguese teacher is very demanding."))
# expected output: Le professeur de portugais est trรจs exigeant.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fra/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-fra/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| eng-fra | tatoeba-test-v2021-08-07 | 0.69621 | 53.2 | 12681 | 106378 |
| eng-fra | flores101-devtest | 0.72494 | 52.2 | 1012 | 28343 |
| eng-fra | multi30k_test_2016_flickr | 0.72361 | 52.4 | 1000 | 13505 |
| eng-fra | multi30k_test_2017_flickr | 0.72826 | 52.8 | 1000 | 12118 |
| eng-fra | multi30k_test_2017_mscoco | 0.73547 | 54.7 | 461 | 5484 |
| eng-fra | multi30k_test_2018_flickr | 0.66723 | 43.7 | 1071 | 15867 |
| eng-fra | newsdiscussdev2015 | 0.60471 | 33.4 | 1500 | 27940 |
| eng-fra | newsdiscusstest2015 | 0.64915 | 40.3 | 1500 | 27975 |
| eng-fra | newssyscomb2009 | 0.58903 | 30.7 | 502 | 12331 |
| eng-fra | news-test2008 | 0.55516 | 27.6 | 2051 | 52685 |
| eng-fra | newstest2009 | 0.57907 | 30.0 | 2525 | 69263 |
| eng-fra | newstest2010 | 0.60156 | 33.5 | 2489 | 66022 |
| eng-fra | newstest2011 | 0.61632 | 35.0 | 3003 | 80626 |
| eng-fra | newstest2012 | 0.59736 | 32.8 | 3003 | 78011 |
| eng-fra | newstest2013 | 0.59700 | 34.6 | 3000 | 70037 |
| eng-fra | newstest2014 | 0.66686 | 41.9 | 3003 | 77306 |
| eng-fra | tico19-test | 0.63022 | 40.6 | 2100 | 64661 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Unionโs Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Unionโs Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 17:07:05 EEST 2022
* port machine: LM0-400-22516.local
|
fcakyon/yolov5s-v7.0 | fcakyon | 2022-12-20T09:51:11Z | 1,375 | 13 | transformers | [
"transformers",
"object-detection",
"computer-vision",
"vision",
"yolo",
"yolov5",
"dataset:detection-datasets/coco",
"license:gpl-3.0",
"region:us"
] | object-detection | 2022-12-13T21:26:21Z | ---
license: gpl-3.0
inference: false
tags:
- object-detection
- computer-vision
- vision
- yolo
- yolov5
datasets:
- detection-datasets/coco
---
### How to use
- Install yolov5:
```bash
pip install -U yolov5
```
- Load model and perform prediction:
```python
import yolov5
# load model
model = yolov5.load('fcakyon/yolov5s-v7.0')
# set model parameters
model.conf = 0.25 # NMS confidence threshold
model.iou = 0.45 # NMS IoU threshold
model.agnostic = False # NMS class-agnostic
model.multi_label = False # NMS multiple labels per box
model.max_det = 1000 # maximum number of detections per image
# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model(img)
# inference with larger input size
results = model(img, size=640)
# inference with test time augmentation
results = model(img, augment=True)
# parse results
predictions = results.pred[0]
boxes = predictions[:, :4] # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]
# show detection bounding boxes on image
results.show()
# save results into "results/" folder
results.save(save_dir='results/')
```
- Finetune the model on your custom dataset:
```bash
yolov5 train --img 640 --batch 16 --weights fcakyon/yolov5s-v7.0 --epochs 10 --device cuda:0
``` |
beomi/KoAlpaca-KoRWKV-6B | beomi | 2023-09-15T01:27:53Z | 1,375 | 7 | transformers | [
"transformers",
"pytorch",
"safetensors",
"rwkv",
"text-generation",
"generated_from_trainer",
"KoRWKV",
"KoAlpaca",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"base_model:beomi/KoRWKV-6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-02T12:00:13Z | ---
language:
- ko
license: apache-2.0
library_name: transformers
tags:
- generated_from_trainer
- KoRWKV
- KoAlpaca
datasets:
- beomi/KoAlpaca-v1.1a
pipeline_tag: text-generation
base_model: beomi/KoRWKV-6B
model-index:
- name: KoRWKV-6B-koalpaca-v1.1a
results: []
---
> 🚧 Note: this repo is under construction; the currently uploaded version is based on KoRWKV-6B, a checkpoint trained on ~28 billion tokens 🚧
# 🚧 KoAlpaca-KoRWKV-6B (v1.1a)
This model is a fine-tuned version of [beomi/KoRWKV-6B](https://huggingface.co/beomi/KoRWKV-6B) on the [KoAlpaca v1.1a dataset](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a).
Detail Codes are available at [KoAlpaca Github Repository](https://github.com/Beomi/KoAlpaca)
## Generation Code
### Install Requirements
```bash
pip install -U transformers tokenizers accelerate safetensors
```
### Generation Code with HF pipeline
```python
from transformers import pipeline
pipe = pipeline('text-generation', model='beomi/KoAlpaca-KoRWKV-6B', device=0)
def gen(x):
a = pipe(
f'### ์ง๋ฌธ: {x}\n\n### ๋ต๋ณ:',
max_new_tokens=1024,
return_full_text=False,
do_sample=True,
top_p=0.9,
temperature=0.7,
early_stopping=True,
repetition_penalty=1.2,
)
print(a[0]['generated_text'])
```
### Generated Samples
```
gen("์ค๋ ๊ธฐ๋ถ์ ์ข ์ด๋?")
์ค๋์ ๊ธฐ๋ถ์ด ์ข์ ๋ ์
๋๋ค. ์๋ํ๋ฉด, ์ค๋์ ๊ธฐ์ ์ผ์ด ์๊ธธ ์ ์๋ ๋ ์ด๋๊น์!
1. ์์นจ์ ์ผ์ฐ ์ผ์ด๋์ ๊ธฐ๋ถ ์ข์์ต๋๋ค. 2. ๋ง์๋ ์์์ ๋จน์ด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 3. ์์
์๊ฐ์ ์ด์ฌํ ๋ค์ด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 4. ์น๊ตฌ๊ฐ ์์ผ ์ ๋ฌผ์ ์ค์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 5. ๋ด์ผ์ด ํ ์์ผ์ด๋ผ์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 6. ์์ ๋ฅผ ํด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 7. ๋ถ๋ชจ๋๊ป์ ์์ผ ์ถํ ๋
ธ๋๋ฅผ ๋ถ๋ฌ์ค์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 8. ๋ฐฉ ์ฒญ์๋ฅผ ํด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 9. ๋ง์๋ ๊ณผ์๋ฅผ ๋จน์ด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 10. ์ค๋ ๋ ์จ๋ฅผ ๋ณด๋ ๋น๊ฐ ์์์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 11. ๋ถ๋ชจ๋์ด ๋๋ฅผ ์ฌ๋ํด์ฃผ์
์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 12. ๋ถ๋ชจ๋๊ณผ ํจ๊ป ์ธ์์ ํด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 13. ์น๊ตฌ๋ ๊ฐ์ด ๋์์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 14. ๊ณต๋ถ๋ฅผ ํด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 15. ์์ ๋ฅผ ์ ํด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 16. ๋ง์๋ ์์์ ๋จน์ด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 17. ์น๊ตฌ๋ค๊ณผ ์ด์ธ๋ ค ๋์์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 18. ๋ด๊ฐ ์ข์ํ๋ ์ฑ
์ ์ฝ์ด์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 19. ์ค๋ ํ๋ฃจ ์ข
์ผ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค. 20. ์๋ง์ ์๋น ๊ฐ ๋๋ฅผ ์ฌ๋ํด ์ฃผ์
์ ๊ธฐ๋ถ์ด ์ข์ต๋๋ค.
```
```
gen("์นธํธ์ ์ ์ธ๋ช
๋ น์ ๋ฐ๋ฅธ๋ค๋ ์ ์ด ์ผ๋ฐ์ ์ธ ์ ํจ๊ณผ ๋ค๋ฅธ ์ด์ ๊ฐ ๋ญ์ผ?")
์นธํธ๋ ๋๋์ ๋ฒ์น์ ๋ ๊ฐ์ง๋ก ๊ตฌ๋ถํ์ต๋๋ค. ์ฒซ ๋ฒ์งธ ๋ฒ์น์ '์ ์ธ๋ช
๋ น'์ผ๋ก, ์ด ๋ฒ์น์ ์ด๋ค ์ํฉ์์๋ ์ ๋์ ์ผ๋ก ์ง์ผ์ผ ํ๋ ๋ณดํธ์ ์ธ ๋ฒ์น์
๋๋ค. ์ด์ ๋ฐ๋ผ ์นธํธ๋ "์ด์ฑ์ด ๋ช
๋ นํ ๋๋ก ํ๋ํ๋ผ"๋ ์์น์ ์ ์ํ์ต๋๋ค.
๋ ๋ฒ์งธ ๋ฒ์น์ "๋ชจ๋ ์ธ๊ฐ์ ํ๋ฑํ๊ฒ ํ์ด๋ฌ๊ณ , ์ฐฝ์กฐ์ฃผ๋ก๋ถํฐ ์๋ํ ์ ์๋ ๊ถ๋ฆฌ๋ฅผ ๋ถ์ฌ๋ฐ์๋ค"๋ ๊ฒ์
๋๋ค. ์ด ์์น์๋ ๋ชจ๋ ์ธ๊ฐ์ด ํ๋ฑํ๊ณ , ๋ชจ๋ ์ธ๊ฐ์ ๋๋ฑํ๊ฒ ์กด์ค๋ฐ์์ผ ํ๋ค๋ ์๋ฏธ๊ฐ ๋ด๊ฒจ ์์ต๋๋ค. ์ด๋ฌํ ์นธํธ์ ๋๋ ๋ฒ์น์ ๋ฐ๋ฅด๋ฉด, ์ ํจ๊ณผ ์
ํจ์ ์ํฉ์ ๋ฐ๋ผ ๋ค๋ฅด๊ฒ ์ ์ฉ๋ ์ ์์ต๋๋ค. ์๋ฅผ ๋ค์ด, ๋ฒ์ฃ์๋ ์ฅ์ ์ธ ๋ฑ ํน๋ณํ ์ํฉ์์๋ ์ ์ด ๋ ์ ์๋ ์ํฉ์ด ์กด์ฌํฉ๋๋ค.
```
```
gen("๋ค์ ๊ธ์ ์์ด๋ก ๋ฒ์ญํด์ฃผ์ธ์. ์์ด ๋ฌธ์ฅ: 'She looks handsome.' ๋ฒ์ญ ํ๊ธ ๋ฌธ์ฅ:")
"She looks handsome."์ ์ง์ญํ๋ฉด "๊ทธ๋
๋ ์์๊ฒจ ๋ณด์
๋๋ค"๋ผ๋ ๋ป์ด์ง๋ง, ํ๊ตญ์ด๋ก๋ "๊ทธ๋
๋ ์๋ฆ๋ค์ ๋ณด์
๋๋ค"๋ก ํด์๋ฉ๋๋ค. ์ด ๋ฌธ์ฅ์ ์๋ 'The saddest thing'์ด๋ผ๋ ์ํ์ OST ๊ฐ์ฌ ์ค ์ผ๋ถ๋ถ์
๋๋ค. ์ด ์ํ๋ ์ ์ธ ํฐ๋ค๊ฐ ์ฃผ์ฐ์ ๋งก์ ์ํ์ผ๋ก, ๊ทธ๋
๊ฐ ๋งค๋ ฅ์ ์ธ ๋ฏธ๋ชจ์ ๋ชธ๋งค๋ฅผ ๊ฐ์ง๊ณ ์์ง๋ง ๋ถํํ ์ถ์ ์ฌ๋ ์ฌ์ฑ์ ๋ชจ์ต์ ๊ทธ๋ฆฌ๊ณ ์์ต๋๋ค. ๋ฐ๋ผ์ ๊ทธ๋
๋ ์๋ฆ๋ต์ง๋ง ๋ถํํด๋ณด์ด๋ ๋ชจ์ต์ด๋ผ๋ ์๋ฏธ์์ ์ฌ์ฉ๋์์ต๋๋ค.
```
```
gen("์์ธ์ด ์ธ๊ธฐ์๋ ๋์์ธ ์ด์ ๋?")
์์ธ์ ๋ง์ ์ฌ๋๋ค์ด ๊ฑฐ์ฃผํ๊ณ ์๋ ๋๋์์ด๊ธฐ ๋๋ฌธ์
๋๋ค. ๋ํ, ๋ค๋ฅธ ๋๋ผ์์๋ ์์ธ๋ณด๋ค ๋ ํฐ ๋๋์๋ค์ด ๋ง์ด ์์ง๋ง, ์๋๊ฐ ์์ธ์ ์์นํ ๋๋ผ๋ ์์ต๋๋ค. ์ด๋ ์์ธ์ ๋์ ๊ท๋ชจ๊ฐ ํฌ๊ณ ๊ตํต๊ณผ ํต์ ๋ฑ ์ธํ๋ผ๊ฐ ์ ๊ตฌ์ถ๋์ด ์๊ธฐ ๋๋ฌธ์
๋๋ค.
์ถ๊ฐ ์ค๋ช
:
์์ธ์ ์ธ๊ตฌ๋ ์ ๋์ธ๊ตฌ ๋ฑ์ ์ดํด๋ณด๋ฉด, ์์ธ์ ๋ฐฉ๋ฌธํ๋ ์ธ๊ตญ์ธ ๊ด๊ด๊ฐ๋ ๋งค์ฐ ๋ง์ต๋๋ค. ์ด๋ฌํ ์ด์ ๋ก ์์ธ์ ํ๊ตญ ๋ด์์ ๊ฐ์ฅ ์ธ๊ธฐ ์๋ ๋์ ์ค ํ๋๋ก ์๋ฆฌ์ก๊ณ ์์ต๋๋ค.
```
```
gen("์นธํธ ์ฒ ํ์ ์์๋ฅผ ๋ค๊ณ , ํด๋น ์ฒ ํ์ด ๊ฐ์ง๋ ์์๋ฅผ 3๊ฐ์ง ์๋ ค์ค.")
์นธํธ๋ ์ธ๊ฐ์ ์ง์์ ๊ฒฝํ๋ก ์ ์
์ฅ์์ ๋ถ์ํ๊ณ ์ข
ํฉํ์ฌ ๋๋์ฒ ํ์ ์ ๋ฆฝํ์์ต๋๋ค. ์ด ๊ณผ์ ์์ ๊ทธ๋ ๋ชจ๋ ์ธ๊ฐ์ ์ด์ฑ์ ๊ฐ์ง๊ณ ์์ผ๋ฉฐ, ๋๋๋ฒ์น์ ์ธ์ํ ์ ์๋ค๊ณ ๋ณด์์ต๋๋ค. ์ด๋ฌํ ์ธ๊ฐ ์ด์ฑ์ ๊ธฐ๋ฅ์ ํตํด, ์นธํธ๋ '์ค์ฒ์ด์ฑ'๊ณผ 'ํ๋จ๋ ฅ'์ ์ค์ํฉ๋๋ค. ์ฆ, ์ด์ฑ์ ์ผ๋ก ํ๋จํ์ฌ ์ฌ๋ฐ๋ฅธ ํ๋จ์ ๋ด๋ฆด ์ ์๋ ๋ฅ๋ ฅ์ด ์ธ๊ฐ์๊ฒ๋ ์๋ค๋ ๊ฒ์
๋๋ค. ๋ํ, ์นธํธ๋ ์ธ๊ฐ์ด ๊ฐ์ง ์ ์ฒ์ ์ธ ๊ด๋
๋ก ์ ์ฌ๊ณ ๋ฐฉ์์ ๊ฑฐ๋ถํ๊ณ , ๊ฒฝํ์ ํตํ ์ค์ฒ์ ์ธ ์ฌ๊ณ ๋ฐฉ์์ผ๋ก์์ ์ด์ฑ์ ๊ฐ์กฐํฉ๋๋ค.
์ถ๊ฐ ์ค๋ช
:
์นธํธ์ ์ฒ ํ์ ํ์ด์ํ, ์ธ์๋ก , ์ค๋ฆฌํ, ์ ์นํ ๋ฑ ๋ค์ํ ๋ถ์ผ์์ ์ํฅ์ ๋ฏธ์น๊ณ ์์ต๋๋ค. ํนํ, ๊ทธ๋ ๊ฒฝํ๋ก ๊ณผ ํฉ๋ฆฌ๋ก ์ ๋๋ฆฝ์ผ๋ก ์ธํด ๋ถ์ด๋ ์ ๋ฝ์ ์ฌ์๊ณ๋ฅผ ํตํฉํ์ฌ ์๋ก์ด ์ฒ ํ์ ์ ์ํ๊ณ ์ ๋
ธ๋ ฅํ์์ต๋๋ค. ์ด๋ฅผ ํตํด ๊ทผ๋ ์ฒ ํ์ ํฐ ์ํฅ์ ์ฃผ์์ผ๋ฉฐ, ํ์ฌ๋ ๊ทธ ์ํฅ๋ ฅ์ด ์ฌ์ ํ ์ ์ง๋๊ณ ์์ต๋๋ค.
```
```
gen("๋ฆฌํฌ์ด์จ์ ์ง์์ ๋ฐ์ํ๋ ํํ ํ์์ ํํ์์ ๊ธฐ๋ฐ์ผ๋ก ์์ธํ ์ค๋ช
ํด์ฃผ์ธ์.")
๋ฆฌํฌ์ด์จ์ ์ง๋ ์๊ทน๊ณผ ์๊ทน ์ฌ์ด์ ์๋ ๋ถ๋ฆฌ๋ง์ ํตํด ์ด์จ์ด ์ด๋ํ๋ฉด์ ์ถฉ์ ๊ณผ ๋ฐฉ์ ์ ํ๋ ์๋ฆฌ๋ก ์๋ํฉ๋๋ค. ๋ฐ๋ผ์, ์ ๊ทน์ ๋ฐ์์ ์ํด ํํ ๋ฐ์์ด ์ผ์ด๋ฉ๋๋ค.
1. ๋ฆฌํฌ์ด์จ์ ์ง ๋ด๋ถ์๋ ๋ฆฌํฌ์ด์จ๋ค์ด ์ด๋ํ๊ธฐ ์ํ ํต๋ก๊ฐ ์๋๋ฐ, ์ด๋ฅผ ์ ํด์ง์ด๋ผ๊ณ ํฉ๋๋ค. ์ฆ, ์ก์ฒด ์ ํด์ง์ ์ฑ์ฐ๊ณ ์๋ ์ด์จ๋ค์ ๋๋ ์ฐจ์ด ๋๋ฌธ์ ์ ๊ทน์์ ์ ์๊ฐ ์ด๋ํ๋ ๊ฒ์
๋๋ค. ์ด๋ ์ ํด์ง์ ๋ฌผ์ง์ด ์๋ ํํ๋ฐ์์ ์ผ์ผํค๋ ์ญํ ์ ํฉ๋๋ค. 2. ์๊ทน์ ์ ์๋ฅผ ๋ฐ์ ์ด์จ์ผ๋ก ์์ฑํ๋ ๋ถ๋ถ์
๋๋ค. 3. ์๊ทน์์๋ ์ฐํ ํ์ ๋ฐ์์ ํตํด ์ ์์ ์์ ์ด์จ์ด ์ด๋ํ๋ฉฐ, ์ด๋ฅผ ํตํด ์ ๊ธฐ๊ฐ ๋ฐ์ํฉ๋๋ค. 4. ์ ์๋ ์๊ทน์์ ์๊ทน์ผ๋ก ์ด๋ํ๋ฉฐ, ์ด๋ ์ ์์ ํ๋ฆ์ด๋ฉฐ, ์ด๋ฅผ ํตํด ์ ๋ฅ๊ฐ ํ๋ฅด๊ฒ ๋ฉ๋๋ค. 5. ์ ์๋ ์๊ทน์์ ์๊ทน์ผ๋ก ์ด๋ํ๋๋ฐ, ์ด๋ ์ ์๊ฐ ์ ์์ด๋์ ํ๋ฉด์ ๋ฐ์ํ ์ด๋์๋์ง๋ก ์ธํด ๋ฐ์ํฉ๋๋ค. 6. ์ ๊ธฐ์ ์ผ๋ก ๊ฐ์ฅ ์์ ๋ ์ํ์ธ ๋ฆฌํฌ์ด์จ์ ๊ฐ์ฅ ๋์ ์จ๋(์ฝ 700๋)์์๋ ์ ๊ฒฌ๋ ์ ์์ต๋๋ค.
```
```
gen("ํฌ์ ์ด์ฌ์๊ฐ ํ๊ธฐ ์ข์ ์ ๋ฌผ ํฌ์ ์ข
๋ฅ๋ฅผ ์๋ ค์ฃผ์ธ์.")
์ฃผ์, ์ฑ๊ถ, ๊ธ, ์์ ๋ฑ ๋ค์ํ ์ํ์ด ์์ต๋๋ค.
1. ์ฃผ์ (์ฃผ์ํฌ์) - ์ฅ๊ธฐ์ ์ธ ๊ด์ ์์ ํฌ์ํ ๋ ์ ํฉํ ์ํ์
๋๋ค.
2. ๊ธ (๊ธํฌ์) - ์์ ํ ํฌ์๋ฅผ ์ํ๋ ์ฌ๋์๊ฒ ์ข์ต๋๋ค.
3. ์์ (์์์ฌํฌ์) - ์์ ์ ์ธ ์์ต๊ณผ ๊ฐ๊ฒฉ ๋ณ๋์ฑ์ด ์ ์ต๋๋ค.
4. ๋ถ๋์ฐ (๋ถ๋์ฐํฌ์) - ์ธํ๋ ์ด์
์ด ๋ฐ์ํ๋ ์๊ธฐ์ ๊ฐ์ฅ ์ ํฉํ ํฌ์ ๋ฐฉ๋ฒ ์ค ํ๋์
๋๋ค.
5. ๋ณดํ (๋ณดํํฌ์) - ๋ณด์ฅ์์ฐ์ ๋ํ ๋์ฆ๊ฐ ์๋ ์ฌ๋์๊ฒ ์ ํฉํ ์ํ์
๋๋ค.
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
- Trained on 1x H100(80G PCI-E) GPU
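For illustration only, a rough sketch of how these reported hyperparameters might map onto Hugging Face `TrainingArguments`; the actual training script lives in the KoAlpaca repository linked above, so this mapping is an assumption and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above; assumes the standard HF Trainer.
training_args = TrainingArguments(
    output_dir="koalpaca-korwkv-6b",   # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,     # total effective batch size of 8
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
)
```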
### Framework versions
- Transformers 4.29.2
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3 |
Expert68/llama2_13b_instructed_version2 | Expert68 | 2023-10-15T10:06:39Z | 1,375 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-14T02:27:16Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
inference: false
license: apache-2.0
---
# Model Card
## Training Dataset
`llama2_13b_instructed` is trained on multiple datasets:
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [UltraChat (en)](https://github.com/thunlp/UltraChat) |
NeverSleep/Mistral-11B-SynthIAirOmniMix | NeverSleep | 2023-11-17T21:07:47Z | 1,375 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-14T14:34:58Z | ---
license: cc-by-nc-4.0
---
Replaced Zephyr with Airoboros 2.2 and OpenOrca with SynthIA in the mix. The goal is to see whether merging Mistral models that all share the same prompt format is a better approach.
## Description
This repo contains fp16 files of Mistral-11B-SynthIAirOmniMix.
## Model used
- [SynthIA-7B-v1.5](https://huggingface.co/migtissera/SynthIA-7B-v1.5)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b)
## Prompt template
3 out of 4 models in this merge use the same prompting format.
The best one should be this one, since Zephyr and OpenOrca are out of the merge:
```
(SYSTEM: {context}) - Not mandatory
USER: {prompt}
ASSISTANT:
```
But this one may work too:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
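As a small illustration (not part of the original card), the first format above can be assembled in Python like this; `build_prompt`, `context` and `prompt` are names introduced here for the example:

```python
def build_prompt(prompt, context=None):
    """Assemble the (SYSTEM)/USER/ASSISTANT prompt described above."""
    system = f"SYSTEM: {context}\n" if context else ""
    return f"{system}USER: {prompt}\nASSISTANT:"

print(build_prompt("Write a haiku about model merging."))
```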
## The secret sauce
Mistral-11B-SynthIAOpenPlatypus :
```
slices:
- sources:
- model: "/content/drive/MyDrive/SynthIA-7B-v1.5-bf16"
layer_range: [0, 24]
- sources:
- model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-CC-Airo :
```
slices:
- sources:
- model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
layer_range: [0, 24]
- sources:
- model: "/content/drive/MyDrive/Mistral-7B-Airoboros-2.2-bf16"
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-SynthIAirOmniMix :
```
slices:
- sources:
- model: Mistral-11B-SynthIAOpenPlatypus
layer_range: [0, 48]
- model: Mistral-11B-CC-Airo
layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I use [mergekit](https://github.com/cg123/mergekit) for all the manipulations described here.
## Some scoring I did myself

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5410|ยฑ |0.0146|
| | |acc_norm|0.5640|ยฑ |0.0145|
|arc_easy | 0|acc |0.8228|ยฑ |0.0078|
| | |acc_norm|0.8068|ยฑ |0.0081|
|hellaswag | 0|acc |0.6274|ยฑ |0.0048|
| | |acc_norm|0.8167|ยฑ |0.0039|
|piqa | 0|acc |0.8052|ยฑ |0.0092|
| | |acc_norm|0.8232|ยฑ |0.0089|
|truthfulqa_mc| 1|mc1 |0.3905|ยฑ |0.0171|
| | |mc2 |0.5592|ยฑ |0.0155|
|winogrande | 0|acc |0.7364|ยฑ |0.0124|
## Others
Special thanks to Sushi, [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and [Charles Goddard](https://github.com/cg123) for his amazing tool.
If you want to support me, you can do so [here](https://ko-fi.com/undiai).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_NeverSleep__Mistral-11B-SynthIAirOmniMix)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 54.56 |
| ARC (25-shot) | 62.46 |
| HellaSwag (10-shot) | 83.13 |
| MMLU (5-shot) | 63.47 |
| TruthfulQA (0-shot) | 55.69 |
| Winogrande (5-shot) | 76.4 |
| GSM8K (5-shot) | 11.9 |
| DROP (3-shot) | 28.88 |
|
Korabbit/Llama-2-7b-chat-hf-afr-200step-v2 | Korabbit | 2023-11-22T11:07:17Z | 1,375 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:vicgalle/alpaca-gpt4",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-22T08:55:34Z | ---
license: llama2
datasets:
- vicgalle/alpaca-gpt4
language:
- en
---
Hi there! This is a test of my "AFR training" approach.
The base model used is "Llama-2-7b-chat".
System Prompt: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. \n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
Instruction: Implement binary search in python.
Response: Sure! Here is an example of how to implement a binary search algorithm in Python:
```
def binary_search(arr, target):
# Initialize the index of the target element
low = 0
high = len(arr) - 1
# Loop until the target element is found or the end of the array is reached
while low <= high:
# Calculate the middle index of the array
mid = (low + high) // 2
# Check if the target element is in the middle index
if arr[mid] == target:
return mid
# If the target element is not in the middle index, check if it is less than the middle index
if arr[mid] > target:
low = mid + 1
else:
high = mid - 1
# If the target element is not found in the array, return -1
return -1
# Test the binary search function
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
result = binary_search(arr, target)
print(f"The target element {target} is at index {result}")
```
This code defines a `binary_search` function that takes an array and a target element as input, and returns the index of the target element in the array if it is found, or -1 if it is not found.
The function works by iteratively dividing the range of the array into two sub-ranges, one containing elements less than the target, and one containing elements greater than the target. It then checks the middle index of each sub-range to see if the target element is present. If the target element is found, the function returns the index of the middle index. If the target element is not found, the function returns -1.
The code also includes a test case to demonstrate how to use the `binary_search` function. In this case, the target element is 5, and the function returns the index of the element 5 in the array.
I hope this helps! Let me know if you have any questions. |
S4sch/zephyr-neural-chat-frankenmerge11b | S4sch | 2023-12-02T19:07:39Z | 1,375 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-28T21:58:49Z | ---
license: apache-2.0
---
Frankenmerge 11b between HuggingFaceH4/zephyr-7b-beta and Intel/neural-chat-7b-v3-1
Merge with the following conditions (via mergekit on github)
```
model: Intel/neural-chat-7b-v3-1
layer_range: [0, 8]

model: HuggingFaceH4/zephyr-7b-beta
layer_range: [4, 12]

model: Intel/neural-chat-7b-v3-1
layer_range: [9, 16]

model: HuggingFaceH4/zephyr-7b-beta
layer_range: [13, 20]

model: Intel/neural-chat-7b-v3-1
layer_range: [17, 24]

model: HuggingFaceH4/zephyr-7b-beta
layer_range: [21, 28]

model: Intel/neural-chat-7b-v3-1
layer_range: [25, 32]

merge_method: passthrough
```
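For reference, a sketch of how these layer ranges could be written as a mergekit `slices` config; the exact file used by the author is not included in this card, so treat this as an illustration rather than the original recipe:

```
slices:
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [0, 8]
  - sources:
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [4, 12]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [9, 16]
  - sources:
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [13, 20]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [17, 24]
  - sources:
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [21, 28]
  - sources:
      - model: Intel/neural-chat-7b-v3-1
        layer_range: [25, 32]
merge_method: passthrough
```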
|
Korabbit/Llama-2-7b-chat-hf-afr-300step-flan-v2 | Korabbit | 2023-12-04T12:41:03Z | 1,375 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:sordonia/flan-10k-flat",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-03T11:20:38Z | ---
license: llama2
datasets:
- sordonia/flan-10k-flat
language:
- en
---
Hi there! This is a test of my "AFR training" approach.
The base model used is "Llama-2-7b-chat".
System Prompt: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. \n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
Instruction: Implement binary search in python.
Response: Sure! Here is an example of how to implement a binary search algorithm in Python:
```
def binary_search(arr, target):
# Initialize the index of the target element
low = 0
high = len(arr) - 1
# Loop until the target element is found or the end of the array is reached
while low <= high:
# Calculate the middle index of the array
mid = (low + high) // 2
# Check if the target element is in the middle index
if arr[mid] == target:
return mid
# If the target element is not in the middle index, check if it is less than the middle index
if arr[mid] > target:
low = mid + 1
else:
high = mid - 1
# If the target element is not found in the array, return -1
return -1
# Test the binary search function
arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
target = 5
result = binary_search(arr, target)
print(f"The target element {target} is at index {result}")
```
This code defines a `binary_search` function that takes an array and a target element as input, and returns the index of the target element in the array if it is found, or -1 if it is not found.
The function works by iteratively dividing the range of the array into two sub-ranges, one containing elements less than the target, and one containing elements greater than the target. It then checks the middle index of each sub-range to see if the target element is present. If the target element is found, the function returns the index of the middle index. If the target element is not found, the function returns -1.
The code also includes a test case to demonstrate how to use the `binary_search` function. In this case, the target element is 5, and the function returns the index of the element 5 in the array.
I hope this helps! Let me know if you have any questions. |
vihangd/dopeyshearedplats-2.7b-v1 | vihangd | 2023-12-16T13:11:27Z | 1,375 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-16T13:02:51Z | ---
license: llama2
---
<p><h1> DopeyShearedPlats-2.7b </h1></p>
An experimental DPO finetune of shearedplats 2.7b with Alpaca-QLoRA
<h2> Datasets </h2>
Trained on Alpaca-style datasets
<p><h2> Prompt Template </h2></p>
Uses an Alpaca-style prompt template (a reference sketch is shown below).
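The standard Alpaca-style template is reproduced here as a reference; the exact wording used for this finetune is not stated in the card, so treat it as an assumption:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```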
<br><br> |
jjourney1125/M-SOLAR-10.7B-v1.0 | jjourney1125 | 2023-12-27T17:20:06Z | 1,375 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-25T03:57:17Z | ---
license: cc-by-nc-sa-4.0
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
---
## ์ด ๋ชจ๋ธ์ ๋ฉ๊ฐ์คํฐ๋๊ต์ก(์ฃผ)๊ณผ (์ฃผ)ํ๋ฆฌ๋์
(์ฃผ)๋ง์ด์ค์ ๊ณต๋์ฐ๊ตฌ๋ก ๊ฐ๋ฐ๋ ๋ชจ๋ธ์
๋๋ค.
## ์ฌ์ฉ ๋ฐ์ดํฐ์
 & Strategy
Instruction Fine-Tuning ๋ฐ์ดํฐ์
์ ๊ตฌ์ถํ๊ธฐ ์ํด ๋ค์ํ ์๋๋ฅผ ์งํํ์์ต๋๋ค.
ํ์ฌ ์ฌ์ฉํ ๋ฐฉ๋ฒ๋ค์ ๋ํด์ ์ ๋๋ก ๋ ablation study๊ฐ ์งํ๋์ง ์์๊ธฐ ๋๋ฌธ์, ์์ด๋์ด ์์ฃผ๋ก ์ฐธ๊ณ ํด์ฃผ์๊ธธ ๋ถํ๋๋ฆฝ๋๋ค :)
### 1. [kyujinpy/KOR-OpenOrca-Platypus-v3 ๐](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
### 2. [KorQuAD 2.1](korquad.github.io) : Training ๋ฐ์ดํฐ ์ค ์ผ๋ถ๋ฅผ ์ถ์ถํ์ฌ ์ฌ์ฉํ์์ต๋๋ค.
### 3. AIHUB ๊ธฐ์ ๊ณผํ ์์ฝ ๋ฐ์ดํฐ
- AIHUB ๊ธฐ์ ๊ณผํ ์์ฝ ๋ฐ์ดํฐ์ `context`, `summary`, `clue` ๊ฐ์ ํ์ฉํ์ฌ Instruction์ ํฌํจํ ๋ฌธ์ฅ ์ ์ถ ๋ฐ์ดํฐ์
์ ๋ง๋ค์ด ํ์ฉํ์ต๋๋ค.
- **์๋ณธ ๋ฐ์ดํฐ**
```json
{'context': '<h1>์ฌ๋ฃ ๋ฐ ๋ฐฉ๋ฒ</h1> <h2>์คํ ๋๋ฌผ</h2> <p>๋ณธ ์ฐ๊ตฌ๋ 4์ฃผ๋ น์ C57BL/6 ์ค์ผ๋ก ์์ปท ํด ์ด 20๋ง๋ฆฌ๋ฅผ (์ฃผ)์ค์์คํ๋ฅ๋ฌผ์์ ๋ถ์ ๋ฐ์ 1์ฃผ๊ฐ์ ์คํ์ค ์ ์๊ธฐ๋ฅผ ๊ฑฐ์น ํ, ๋น๋ง์ ์ ๋ํ๊ธฐ ์ํด 5์ฃผ๊ฐ ๊ณ ์ง๋ฐฉ์์ด๋ฅผ ์์ ์ญ์ทจํ ๋ก ํ์๋ค. ...',
'summary': '20๋ง๋ฆฌ์ C57BL/6 ์ฅ๋ฅผ ์ฌ์ฉํ์ฌ ๋น๋ง์ ์ ๋ํ ๋ค, ํต์ ๊ตฐ๊ณผ ์ด๋๊ตฐ์ผ๋ก ๋๋์ด 8์ฃผ ๋์ ๋ค๋ฅธ ์น๋ฃ๋ฅผ ์ํํ ์ฐ๊ตฌ์ด๋ค. ...',
'clue': [{'clue_text': '๋ณธ ์ฐ๊ตฌ๋ 4์ฃผ๋ น์ C57BL/6 ์ค์ผ๋ก ์์ปท ํด ์ด 20๋ง๋ฆฌ๋ฅผ (์ฃผ)์ค์์คํ๋ฅ๋ฌผ์์ ๋ถ์ ๋ฐ์ 1์ฃผ๊ฐ์ ์คํ์ค ์ ์๊ธฐ๋ฅผ ๊ฑฐ์น ํ, ๋น๋ง์ ์ ๋ํ๊ธฐ ์ํด 5์ฃผ๊ฐ ๊ณ ์ง๋ฐฉ์์ด๋ฅผ ์์ ์ญ์ทจํ ๋ก ํ์๋ค.',
'clue_start': '35',
'clue_end': '144'},
{'clue_text': '๋น๋ง ์ ๋ ํ ํต์ ๊ตฐ(HLC, High fat diet for 5 weeks and low fat diet 8 weeks as the control) 10๋ง๋ฆฌ์ ์ด๋๊ตฐ(HLE, High fat diet for 5 weeks and low fat diet 8weeks with exercise) 10๋ง๋ฆฌ๋ก ๋ฌด์์ ๋ฐฐ์นํ ํ 8์ฃผ๊ฐ์ ์ด๋ ์ฒ์น๋ฅผ ์ค์ํ์๋ค.',
'clue_start': '145',
'clue_end': '343'} ...]}
```
- **์์ฑ ๋ฐ์ดํฐ Input**
```
//{input_starting_comment}
๋น์ ์๊ฒ ์ฃผ์ด์ง ์ด ๋ฌธ์ฅ๋ค์ ์ด๋ค ๋
ผ๋ฌธ์ ์์ฝ์ผ๋ก, ์ด๋ฅผ ๋ฐํ์ผ๋ก ์๋ ๋
ผ๋ฌธ์ ์ ์ถํด์ผ ํฉ๋๋ค.
๊ฐ๋ตํ๋ ๋ณธ๋ฌธ:
//{summary}
20๋ง๋ฆฌ์ C57BL/6 ์ฅ๋ฅผ ์ฌ์ฉํ์ฌ ๋น๋ง์ ์ ๋ํ ๋ค, ํต์ ๊ตฐ๊ณผ ์ด๋๊ตฐ์ผ๋ก ๋๋์ด 8์ฃผ ๋์ ๋ค๋ฅธ ์น๋ฃ๋ฅผ ์ํํ ์ฐ๊ตฌ์ด๋ค. ๋น๋ง ์ ๋๋ 5์ฃผ ๋์ 45\(\%\) ์ง๋ฐฉ ํจ์ ๋์ ๊ณ ์ง๋ฐฉ ์ฌ๋ฃ๋ฅผ ์ญ์ทจ์์ผ ์ด๋ฃจ์ด์ก๋ค. ์ด๋๊ตฐ์ ํธ๋ ๋๋ฐ ์ด๋์ ์ํํ์์ผ๋ฉฐ, ์คํธ๋ ์ค๋ฅผ ์ต์ํํ๊ธฐ ์ํด ์ธ๋ถ ์๊ทน์ด๋ ์ ๊ธฐ ์ถฉ๊ฒฉ์ ์ฃผ์ง ์์๋ค. ์ด๋์ ํธ๋ ๋๋ฐ์ ์ฌ์ฉํ์ฌ ์กฐ์ ์ด ์ฉ์ดํ๋๋ก ํ์์ผ๋ฉฐ, ๋งค์ผ ์คํ ๋๋ฌผ์ ์ํ๋ฅผ ๊ธฐ๋กํ๊ณ ์ ๊ฒํ์๋ค.
//{refer_input_comment}
์ฐธ์กฐํ ๋ฌธ์ฅ๋ค:
//{clues}
๋ณธ ์ฐ๊ตฌ๋ 4์ฃผ๋ น์ C57BL/6 ์ค์ผ๋ก ์์ปท ํด ์ด 20๋ง๋ฆฌ๋ฅผ (์ฃผ)์ค์์คํ๋ฅ๋ฌผ์์ ๋ถ์ ๋ฐ์ 1์ฃผ๊ฐ์ ์คํ์ค ์ ์๊ธฐ๋ฅผ ๊ฑฐ์น ํ, ๋น๋ง์ ์ ๋ํ๊ธฐ ์ํด 5์ฃผ๊ฐ ๊ณ ์ง๋ฐฉ์์ด๋ฅผ ์์ ์ญ์ทจํ ๋ก ํ์๋ค.
๋น๋ง ์ ๋ ํ ํต์ ๊ตฐ(HLC, High fat diet for 5 weeks and low fat diet 8 weeks as the control) 10๋ง๋ฆฌ์ ์ด๋๊ตฐ(HLE, High fat diet for 5 weeks and low fat diet 8weeks with exercise) 10๋ง๋ฆฌ๋ก ๋ฌด์์ ๋ฐฐ์นํ ํ 8์ฃผ๊ฐ์ ์ด๋ ์ฒ์น๋ฅผ ์ค์ํ์๋ค.
์ด๋๋ฐฉ๋ฒ์ ์ด๋๊ฐ๋๋ ์ด๋์๊ฐ, ์ด๋๋์ ์กฐ์ ์ด ์ฉ์ดํ ํธ๋ ๋๋ฐ์ด๋์ผ๋ก ์ ํํ์๋ค.
๋ณธ ์คํ์ ์ด๋์ ์คํธ๋ ์ค๋ฅผ ์ต์ํํ๊ธฐ ์ํด ์ด๋ ํ ์ธ๋ถ์ ์๊ทน์ด๋ ์ ๊ธฐ ์ถฉ๊ฒฉ์ ์ฃผ์ง ์๊ณ ์งํ ํ์๋ค.
์ด๋์ ๋๋ฌผ ํธ๋ ๋๋ฐ ์ด๋์ ๊ต์ก ๋ฐ ์ฐ์ต์ ํตํ์ฌ ์ถฉ๋ถํ ๋ฅ๋ ฅ์ ๊ฐ์ถ์๋ก, ๋งค์ผ ์คํ์ผ์ง๋ฅผ ์์ฑํ๊ณ ์คํ๋๋ฌผ์ ์ํ๋ฅผ ์ ๊ฒํ์๋ค.
```
- **์์ฑ ๋ฐ์ดํฐ Output**
```
//{output_starting_comment}
์ ์๋ ์์ฝ์์ ์ ์ฒด ๋
ผ๋ฌธ์ ๋ด์ฉ์ ์ ์ถํ๊ฒ ์ต๋๋ค.
//{context}
<h1>์ฌ๋ฃ ๋ฐ ๋ฐฉ๋ฒ</h1> <h2>์คํ ๋๋ฌผ</h2> <p>๋ณธ ์ฐ๊ตฌ๋ 4์ฃผ๋ น์ C57BL/6 ์ค์ผ๋ก ์์ปท ํด ์ด 20๋ง๋ฆฌ๋ฅผ (์ฃผ)์ค์์คํ๋ฅ๋ฌผ์์ ๋ถ์ ๋ฐ์ 1์ฃผ๊ฐ์ ์คํ์ค ์ ์๊ธฐ๋ฅผ ๊ฑฐ์น ํ, ...
//{output_ending_comment}
์ ์ฒด ๋
ผ๋ฌธ์ ์์ฑํ์ต๋๋ค. ์ด ๋
ผ๋ฌธ์ ์ด๊ธฐ ์์ฝ์์ ์ ์๋ ์์ด๋์ด๋ฅผ ๋ฐ์ ์ํค๊ณ ๊ตฌ์ฒดํํ์ต๋๋ค.
```
{input_starting_comment}, {input_refer_comment}, {output_starting_comment}, {output_ending_comment} ์ ์ฝ๋ฉํธ๋ค์ GPT-4๋ฅผ ํ์ฉํ์ฌ ๊ฐ ์ฝ๋ฉํธ๋ฅผ ํจ๋ฌํ๋ ์ด์งํ์ฌ ๋ฐ์ดํฐ์
์์ฑ ์ ๋๋คํ๊ฒ ์
๋ ฅํ์์ต๋๋ค.
### 4. In-House ๋ฐ์ดํฐ :
- ๋ฉ๊ฐ์คํฐ๋๊ต์ก๊ณผ ํ๋ฆฌ๋์
์ด ๋ณด์ ํ ๊ต์ก ๋๋ฉ์ธ๊ณผ ๊ด๋ จ๋ In-House ๋ฐ์ดํฐ๋ฅผ ๊ฐ๊ณตํ์ฌ ์ฌ์ฉํ์ต๋๋ค.
### 5. Random Spilt Generation
- 1, 2, 3, 4๋ฒ ๋ฐ์ดํฐ๋ฅผ ํ์ฉํด์ Self-Supervised๊ฐ ๊ฐ๋ฅํ Instruction FineTuning์ ๋ํ ์์ด๋์ด๊ฐ ์๊ฐ๋์ ๋ฐ์ดํฐ๋ฅผ ๊ตฌ์ฑํด ํ์ต ๋ฐ์ดํฐ๋ก ์ฌ์ฉํ์ต๋๋ค.
- ์ฃผ์ด์ง ํ
์คํธ๋ฅผ "."์ผ๋ก Split ํ ํ "."์ผ๋ก ๊ตฌ๋ถ๋ ๋ฆฌ์คํธ๋ฅผ ๋๋คํ๊ฒ ์
ํ ํ ํ ๋ชจ๋ธ์๊ฒ ์๋ณธ ๋ฌธ์ฅ๋ค์ ์์๋ฅผ ์ ์ถํ๋ผ๋ Instrutction์ ์ฃผ์ด ํ์ต ๋ฐ์ดํฐ๋ฅผ ๊ตฌ์ฑํ์์ต๋๋ค.
- **์๋ณธ ๋ฐ์ดํฐ**
```
์ด ์์ฝ์ผ๋ก๋ถํฐ ์ ์ฒด ๋
ผ๋ฌธ์ ์ฌ์์ฑํ๋ ์์
์ ์์ํ๊ฒ ์ต๋๋ค.
<h1>์ ์ฝ</h1> <p>๋น์ ํ ๋ฐ๋์ฒด ์ ๋ ฅ๋ณํ์ฅ์น์ ์ฌ์ฉ์ด ๊ธ์ฆํจ์ ๋ฐ๋ผ ์ ์ ์ธก์ ๋ฐ์๋๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๋ณด์ํ๊ธฐ ์ํ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ดํ ์ฐ๊ตฌ๊ฐ ๋ง์ด ์ด๋ฃจ์ด์ ธ ์์ผ๋ฉฐ ์ค์ฉํ๋ฅผ ์ํ ๋
ธ๋ ฅ์ด ๊ณ์ ๋๊ณ ์๋ค. ๊ทธ๋ฌ๋ ์๋ํํฐ ๋๋น ๋ฅ๋์ ๋ ฅํํฐ์ ๊ฐ๊ฒฉ์ด ์์ง๊น์ง๋ ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ์ด์ ์์ฉํ๊ฐ ๋๋๊ฒ ์ง์ ๋๊ณ ์๋ ์ถ์ธ์ด๋ฉฐ, ํนํ ์ยท์ค์ฉ๋์ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ์ด๋ฅผ ์ํ์ฌ ๋์งํธ ์ ํธ ์ฒ๋ฆฌ์ฉ ํ๋ก์ธ์์ธ DSP(digital signal processor)๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์์ง๊น์ง ๊ทธ ๊ฐ๊ฒฉ์ด ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ๋ฅ๋ํํฐ์ ์ ์ด๊ฐ๊ฒฉ์ ์์น์ํค๋ ์์ธ์ผ๋ก ์์ฉํ๋ค. ํํธ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ฐ๊ฒฉ์ ๋ฎ์ถ๊ธฐ ์ํด ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๋ง์ ๋์
ํ๋ ๊ฒฝ์ฐ ์ ์ดํ๋ก๊ฐ ๋๋ฌด ๋ณต์กํด ์ง๊ณ ์ ์ด์ ์ ์ฐ์ฑ์ด ๋จ์ด์ง๋ ๋จ์ ์ ์๋ฐํ๊ฒ ๋๋ค. ๋ณธ ๋
ผ๋ฌธ์์๋ 3์ \( 5[ \mathrm { kVA } ] \)๊ธ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ ์ ์ด๊ธฐ๋ฅผ ๊ตฌํํ๊ธฐ ์ํด ์ ๊ฐ์ ์์นฉ ๋ง์ดํฌ๋กํ๋ก์ธ์์ธ 80C196KC๋ฅผ ์ฌ์ฉํ์ฌ ๋์งํธ ์ ์ด๋ถ๋ฅผ ๊ตฌ์ฑํ๋ฉฐ ์ด๋ฅผ ํตํด ๋ณด์์ ๋ฅ ์ฑ๋ถ์ ๊ณ์ฐ ๋ฐ ์ง๋ฅ๋จ ์ผ์ ์ ์์ ์ด๋ฅผ ์ํํ๊ณ , ๋ฅ๋ํํฐ ์์คํ
์ ์ ๋ฅ์ ์ด๋ฅผ ์ํ์ฌ ์๋ ๋ก๊ทธ ํํ์ ์ ์ด๊ธฐ์ธ ํ์คํ
๋ฆฌ์์ค ์ ์ด๊ธฐ๋ฅผ ํจ๊ป ์ฌ์ฉํ๋ค. ์ปดํจํฐ ์๋ฎฌ๋ ์ด์
์ ํตํด์ ๋ณด์ ์์คํ
์ ํน์ฑ์ ํด์ํ์์ผ๋ฉฐ ์คํ์ ์ํด ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ๋ฅผ ์ํด ์ค๊ณ๋ ์ ์ด๊ธฐ๊ฐ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์ ์ถฉ์คํ ์ํํจ์ ํ์ธํ์๋ค.</p> <h1>1. ์ ๋ก </h1> <p>๋น์ ํ ๋ฐ๋์ฒด ์ ๋ ฅ๋ณํ์ฅ์น์ ์ฌ์ฉ์ด ๊ธ์ฆํ๊ณ ๋์ฉ๋ํ ๋์ด๊ฐ์ ๋ฐ๋ผ ์ ์ ์ธก์ ๋ฐ์๋๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๋ฐ์์ ๊ณํต ์ ์์ ์ง์ ๋จ์ด๋จ๋ฆฌ๋ ์ฌ๊ฐํ ๋ฌธ์ ๋ก ๋๋๋์ด ์์ผ๋ฉฐ, ์ด๋ฅผ ํด๊ฒฐํ๊ณ ์ ํ๋ ๋
ธ๋ ฅ์ด ๊ณ์์ ์ผ๋ก ์ด๋ฃจ์ด ์ง๊ณ ์๋ค. ํนํ ๊ณํต๋ด์ ๊ณ ์กฐํ๋ฅผ ์ ๊ฑฐํ๊ธฐ ์ํด์ ์ง์์ ์ผ๋ก ์ฌ์ฉ๋์ด ์๋ ๊ธฐ์กด์ ์๋ํํฐ๋ฅผ ๋์ฒดํ๋ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ดํ ์ฐ๊ตฌ๊ฐ ์๋นํ ์ง์ ์ ์ด๋ฃจ์ด ์์ผ๋ฉฐ \( d \)- \( q \) ๋ณํ ์ด๋ก ์ ๋์
ํ ๋ณด์ ๊ฒ์ถ ์๊ณ ๋ฆฌ์ฆ ๋ฐ ์ฌ๋ฌ ๊ฐ์ง ์ ๋ฅ์ ์ด๊ธฐ๋ฒ๋ค์ ๊ฐ๋ฐ๋ก 3์ ์ ์์์์ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๊ดํ ๋ณด์์ด ๋ณด๋ค ํจ์จ์ ์ผ๋ก ์ด๋ฃจ์ด ์ง๊ฒ ๋์๋ค. ๋ฟ๋ง์๋๋ผ ์ ์ ์ธก์ ๋ฐ์ํ๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์ ์ํ ๋ฅ๋์ ๋ ฅํํฐ์ ์ํํ ์ ์ด๋ฅผ ์ํ์ฌ ํ๋์ ์ด ์ด๋ก ์ ๋์
ํ ๋ณต์กํ ์ฐ์ฐ๊ณผ์ ์ ์๋ฐํ๋ ์ ์ด๊ธฐ๋ค์ ๋ฑ์ฅ๊ณผ ๊ณต๊ฐ๋ฒกํฐ ๋ณ์กฐ๊ธฐ๋ฒ์ ์ํ์ ์ํด ๊ณ ์ ๋์งํธ ํ๋ก์ธ์์ ์ฌ์ฉ์ด ์ ํธ๋๊ณ ๋ณดํธํ ๋๊ณ ์๋ ์ถ์ธ์ ์๋ค. ๊ทธ๋ฌ๋ ํ์ฌ ๊ณ ์ ๋์งํธ ํ๋ก์ธ์๋ก์จ ๋ง์ด ์ฌ์ฉ๋๊ณ ์๋ DSP(Digital Signal Processing) ์นฉ(chip)์ ๊ฐ๊ฒฉ์ด ๋ง์ด ๋ฎ์์ง๊ธฐ๋ ํ์ผ๋ ์์ง๊น์ง๋ ๊ทธ ๊ฐ๊ฒฉ์ด ์๋ ๋ก๊ทธ ํ๋ก๋ง์ ์ด์ฉํ๋ ์ ์ด๊ธฐ์ ๋นํ์ฌ ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ์ด๋ถ ๊ตฌ์ฑ์ ํ์ํ ๊ฐ๊ฒฉ์ ์์น์ํค๋ ์์ธ์ผ๋ก ์์ฉํ๋ค. ํนํ ์ยท์ค์ฉ๋์ ๋ฅ๋ํํฐ ์์คํ
์ด ํ์๋ก ๋๋ ๊ฒฝ์ฐ์๋ ์ ์ด๋ถ์ ๊ฐ๊ฒฉ์ด ์ ์ฒด์ ์ธ ์์คํ
์ ๊ฐ๊ฒฉ๋ฉด์์ ์ฐจ์งํ๋ ๋น์ค์ด ํฌ๊ธฐ ๋๋ฌธ์ ํ์ธต ๋ ๊ณ ๋ ค์ ๋์์ด ๋๋ค. ๋ฐ๋ฉด ๋ฅ๋ํํฐ์ ์ ์ด๊ฐ๊ฒฉ์ ๋ฎ์ถ๊ธฐ ์ํด์ ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๋ง์ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ ์ดํ๋ก๊ฐ ๋๋ฌด ๋ณต์กํด์ง๊ณ ์ ์ด์ ์ ์ฐ์ฑ์ด ๋จ์ด์ง๋ ๋จ์ ์ ์๋ฐํ๊ฒ ๋๋ค. ๋ฐ๋ผ์ ๋ณธ ๋
ผ๋ฌธ์์๋ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ๋ฅผ ์คํํ๊ณ , ๋ฅ๋ํํฐ์ ๋ณด์ ๊ธฐ์ค์ ๋ฅ๋ฅผ ๊ณ์ฐํ๋๋ฐ ์์ด ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ ๋ง์ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ๋ฐ์ํ๋ ๊ณ์ฐ์์ ๋ฒ๊ฑฐ๋ก์์ ๊ทน๋ณตํ๊ธฐ ์ํด ๋์งํธ ๋ฐ ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๊ฐ ๊ฒฐํฉ๋ ํํ์ ์ ์ด๊ธฐ ์ค๊ณ๋ฅผ ์ํํ์๋ค. ์ด๋ฅผ ์ํด ์ ๊ฐ์ ์์นฉ ๋ง์ดํฌ๋กํ๋ก์ธ์์ธ ์ธํ
์ฌ 80C196KC๋ฅผ ์ฌ์ฉํ์ฌ ๋์งํธ ์ ์ด๋ถ๋ฅผ ๊ตฌ์ฑํ๊ณ ๋ฅ๋์ ๋ ฅํํฐ์ ๋ณด์ ๊ธฐ์ค์ ๋ฅ ์ฑ๋ถ์ ๊ณ์ฐ ๋ฐ ๋์งํธ PI ์ ์ด๊ธฐ๋ฅผ ์ฌ์ฉํ ์ปจ๋ฒํฐ ์ง๋ฅ๋จ ์ผ์ ์ ์ ์ ์ด๋ฅผ ์ํํ์๋ค. ์์ธ๋ฌ ๋น ๋ฅธ ์ ๋ฅ์๋ตํน์ฑ์ ๊ฐ์ง๊ณ ๋ณด์ ๊ธฐ์ค์ ๋ฅ์ ๊ธ๊ฒฉํ ๋ณํ๋ฅผ ์ ์ถ์ข
ํ๋ฉด์๋ ๊ทธ ๊ตฌํ๊ณผ ๋น์ฉ๋ฉด์์ ํจ์จ์ ์ธ ํ์คํ
๋ฆฌ์์ค ์ ์ด๊ธฐ๋ฅผ ์ปจ๋ฒํฐ์ ์ถ๋ ฅ์ ๋ฅ๋ฅผ ์ ์ดํ๋ ์ ๋ฅ์ ์ด๊ธฐ๋ก์จ ์ฌ์ฉํ์๋ค. ์ปดํจํฐ ์๋ฎฌ๋ ์ด์
๋ฐ 3์ \( 5[ \mathrm { kVA } ] \)๊ธ ๋ฅ๋์ ๋ ฅํํฐ ์์คํ
์ ์ ์ํ์ฌ ์คํ์ ์ค์ํ์์ผ๋ฉฐ, ์ด๋ฅผ ํตํด ๋ฅ๋ํํฐ์ ์ ๊ฐํ๋ฅผ ์ํ์ฌ ์ค๊ณ๋ ์ ์ด๊ธฐ๊ฐ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์์ฉ์ ์ถฉ์คํ ์ํํจ์ ๋ณด์๋ค.</p>
๋
ผ๋ฌธ ์ ์ฒด๋ฅผ ์ฑ๊ณต์ ์ผ๋ก ์์ฑํ์ต๋๋ค. ์ด ์ฐ๊ตฌ๋ ์ ์๋ ์์ฝ์ ํ ๋๋ก ํ์ฌ ์ค์ํ ๋ฐ๊ฒฌ๊ณผ ๊ฒฐ๋ก ์ ๋ด๊ณ ์์ต๋๋ค.
```
- **์์ฑ ๋ฐ์ดํฐ Input**
```
{input_starting_comment}
์ด ๋ฌธ์ฅ๋ค์ ์๋ณธ์ ์ ์ผ๋ก ๋๋ ํ ๊ทธ ์์๋ฅผ ์์๋ก ์์์ต๋๋ค.
์ ๊ณต๋ ๋ฌธ์ฅ๋ค์ ์ด์ฉํด ์๋ณธ์ ๋ฐฐ์ด์ ์ ์ถํ๊ณ , ๊ทธ ๋ด์ฉ์ ์ฌ๊ตฌ์ฑํ์ญ์์ค.
์์์ ์์๋ก ๋์ด๋ ๋ฌธ์ฅ :
{random_shuffled_original_text}
[' ์ด ์ฐ๊ตฌ๋ ์ ์๋ ์์ฝ์ ํ ๋๋ก ํ์ฌ ์ค์ํ ๋ฐ๊ฒฌ๊ณผ ๊ฒฐ๋ก ์ ๋ด๊ณ ์์ต๋๋ค', ' ๋ฟ๋ง์๋๋ผ ์ ์ ์ธก์ ๋ฐ์ํ๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์ ์ํ ๋ฅ๋์ ๋ ฅํํฐ์ ์ํํ ์ ์ด๋ฅผ ์ํ์ฌ ํ๋์ ์ด ์ด๋ก ์ ๋์
ํ ๋ณต์กํ ์ฐ์ฐ๊ณผ์ ์ ์๋ฐํ๋ ์ ์ด๊ธฐ๋ค์ ๋ฑ์ฅ๊ณผ ๊ณต๊ฐ๋ฒกํฐ ๋ณ์กฐ๊ธฐ๋ฒ์ ์ํ์ ์ํด ๊ณ ์ ๋์งํธ ํ๋ก์ธ์์ ์ฌ์ฉ์ด ์ ํธ๋๊ณ ๋ณดํธํ ๋๊ณ ์๋ ์ถ์ธ์ ์๋ค', ' ๊ทธ๋ฌ๋ ์๋ํํฐ ๋๋น ๋ฅ๋์ ๋ ฅํํฐ์ ๊ฐ๊ฒฉ์ด ์์ง๊น์ง๋ ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ์ด์ ์์ฉํ๊ฐ ๋๋๊ฒ ์ง์ ๋๊ณ ์๋ ์ถ์ธ์ด๋ฉฐ, ํนํ ์ยท์ค์ฉ๋์ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ์ด๋ฅผ ์ํ์ฌ ๋์งํธ ์ ํธ ์ฒ๋ฆฌ์ฉ ํ๋ก์ธ์์ธ DSP(digital signal processor)๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์์ง๊น์ง ๊ทธ ๊ฐ๊ฒฉ์ด ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ๋ฅ๋ํํฐ์ ์ ์ด๊ฐ๊ฒฉ์ ์์น์ํค๋ ์์ธ์ผ๋ก ์์ฉํ๋ค', ' ์ด๋ฅผ ์ํด ์ ๊ฐ์ ์์นฉ ๋ง์ดํฌ๋กํ๋ก์ธ์์ธ ์ธํ
์ฌ 80C196KC๋ฅผ ์ฌ์ฉํ์ฌ ๋์งํธ ์ ์ด๋ถ๋ฅผ ๊ตฌ์ฑํ๊ณ ๋ฅ๋์ ๋ ฅํํฐ์ ๋ณด์ ๊ธฐ์ค์ ๋ฅ ์ฑ๋ถ์ ๊ณ์ฐ ๋ฐ ๋์งํธ PI ์ ์ด๊ธฐ๋ฅผ ์ฌ์ฉํ ์ปจ๋ฒํฐ ์ง๋ฅ๋จ ์ผ์ ์ ์ ์ ์ด๋ฅผ ์ํํ์๋ค', ' ๊ทธ๋ฌ๋ ํ์ฌ ๊ณ ์ ๋์งํธ ํ๋ก์ธ์๋ก์จ ๋ง์ด ์ฌ์ฉ๋๊ณ ์๋ DSP(Digital Signal Processing) ์นฉ(chip)์ ๊ฐ๊ฒฉ์ด ๋ง์ด ๋ฎ์์ง๊ธฐ๋ ํ์ผ๋ ์์ง๊น์ง๋ ๊ทธ ๊ฐ๊ฒฉ์ด ์๋ ๋ก๊ทธ ํ๋ก๋ง์ ์ด์ฉํ๋ ์ ์ด๊ธฐ์ ๋นํ์ฌ ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ์ด๋ถ ๊ตฌ์ฑ์ ํ์ํ ๊ฐ๊ฒฉ์ ์์น์ํค๋ ์์ธ์ผ๋ก ์์ฉํ๋ค', '\n ', '</p>\n\n๋
ผ๋ฌธ ์ ์ฒด๋ฅผ ์ฑ๊ณต์ ์ผ๋ก ์์ฑํ์ต๋๋ค', ' ํํธ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ฐ๊ฒฉ์ ๋ฎ์ถ๊ธฐ ์ํด ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๋ง์ ๋์
ํ๋ ๊ฒฝ์ฐ ์ ์ดํ๋ก๊ฐ ๋๋ฌด ๋ณต์กํด ์ง๊ณ ์ ์ด์ ์ ์ฐ์ฑ์ด ๋จ์ด์ง๋ ๋จ์ ์ ์๋ฐํ๊ฒ ๋๋ค', '์ด ์์ฝ์ผ๋ก๋ถํฐ ์ ์ฒด ๋
ผ๋ฌธ์ ์ฌ์์ฑํ๋ ์์
์ ์์ํ๊ฒ ์ต๋๋ค', ' ํนํ ์ยท์ค์ฉ๋์ ๋ฅ๋ํํฐ ์์คํ
์ด ํ์๋ก ๋๋ ๊ฒฝ์ฐ์๋ ์ ์ด๋ถ์ ๊ฐ๊ฒฉ์ด ์ ์ฒด์ ์ธ ์์คํ
์ ๊ฐ๊ฒฉ๋ฉด์์ ์ฐจ์งํ๋ ๋น์ค์ด ํฌ๊ธฐ ๋๋ฌธ์ ํ์ธต ๋ ๊ณ ๋ ค์ ๋์์ด ๋๋ค', ' ๋ฐ๋ผ์ ๋ณธ ๋
ผ๋ฌธ์์๋ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ๋ฅผ ์คํํ๊ณ , ๋ฅ๋ํํฐ์ ๋ณด์ ๊ธฐ์ค์ ๋ฅ๋ฅผ ๊ณ์ฐํ๋๋ฐ ์์ด ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ ๋ง์ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ๋ฐ์ํ๋ ๊ณ์ฐ์์ ๋ฒ๊ฑฐ๋ก์์ ๊ทน๋ณตํ๊ธฐ ์ํด ๋์งํธ ๋ฐ ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๊ฐ ๊ฒฐํฉ๋ ํํ์ ์ ์ด๊ธฐ ์ค๊ณ๋ฅผ ์ํํ์๋ค', '\n\n<h1>์ ์ฝ</h1> <p>๋น์ ํ ๋ฐ๋์ฒด ์ ๋ ฅ๋ณํ์ฅ์น์ ์ฌ์ฉ์ด ๊ธ์ฆํจ์ ๋ฐ๋ผ ์ ์ ์ธก์ ๋ฐ์๋๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๋ณด์ํ๊ธฐ ์ํ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ดํ ์ฐ๊ตฌ๊ฐ ๋ง์ด ์ด๋ฃจ์ด์ ธ ์์ผ๋ฉฐ ์ค์ฉํ๋ฅผ ์ํ ๋
ธ๋ ฅ์ด ๊ณ์ ๋๊ณ ์๋ค', ' ์ ๋ก </h1> <p>๋น์ ํ ๋ฐ๋์ฒด ์ ๋ ฅ๋ณํ์ฅ์น์ ์ฌ์ฉ์ด ๊ธ์ฆํ๊ณ ๋์ฉ๋ํ ๋์ด๊ฐ์ ๋ฐ๋ผ ์ ์ ์ธก์ ๋ฐ์๋๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๋ฐ์์ ๊ณํต ์ ์์ ์ง์ ๋จ์ด๋จ๋ฆฌ๋ ์ฌ๊ฐํ ๋ฌธ์ ๋ก ๋๋๋์ด ์์ผ๋ฉฐ, ์ด๋ฅผ ํด๊ฒฐํ๊ณ ์ ํ๋ ๋
ธ๋ ฅ์ด ๊ณ์์ ์ผ๋ก ์ด๋ฃจ์ด ์ง๊ณ ์๋ค', ' ๋ฐ๋ฉด ๋ฅ๋ํํฐ์ ์ ์ด๊ฐ๊ฒฉ์ ๋ฎ์ถ๊ธฐ ์ํด์ ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๋ง์ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ ์ดํ๋ก๊ฐ ๋๋ฌด ๋ณต์กํด์ง๊ณ ์ ์ด์ ์ ์ฐ์ฑ์ด ๋จ์ด์ง๋ ๋จ์ ์ ์๋ฐํ๊ฒ ๋๋ค', ' ์ปดํจํฐ ์๋ฎฌ๋ ์ด์
๋ฐ 3์ \\( 5[ \\mathrm { kVA } ] \\)๊ธ ๋ฅ๋์ ๋ ฅํํฐ ์์คํ
์ ์ ์ํ์ฌ ์คํ์ ์ค์ํ์์ผ๋ฉฐ, ์ด๋ฅผ ํตํด ๋ฅ๋ํํฐ์ ์ ๊ฐํ๋ฅผ ์ํ์ฌ ์ค๊ณ๋ ์ ์ด๊ธฐ๊ฐ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์์ฉ์ ์ถฉ์คํ ์ํํจ์ ๋ณด์๋ค', '</p> <h1>1', ' ํนํ ๊ณํต๋ด์ ๊ณ ์กฐํ๋ฅผ ์ ๊ฑฐํ๊ธฐ ์ํด์ ์ง์์ ์ผ๋ก ์ฌ์ฉ๋์ด ์๋ ๊ธฐ์กด์ ์๋ํํฐ๋ฅผ ๋์ฒดํ๋ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ดํ ์ฐ๊ตฌ๊ฐ ์๋นํ ์ง์ ์ ์ด๋ฃจ์ด ์์ผ๋ฉฐ \\( d \\)- \\( q \\) ๋ณํ ์ด๋ก ์ ๋์
ํ ๋ณด์ ๊ฒ์ถ ์๊ณ ๋ฆฌ์ฆ ๋ฐ ์ฌ๋ฌ ๊ฐ์ง ์ ๋ฅ์ ์ด๊ธฐ๋ฒ๋ค์ ๊ฐ๋ฐ๋ก 3์ ์ ์์์์ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๊ดํ ๋ณด์์ด ๋ณด๋ค ํจ์จ์ ์ผ๋ก ์ด๋ฃจ์ด ์ง๊ฒ ๋์๋ค', ' ๋ณธ ๋
ผ๋ฌธ์์๋ 3์ \\( 5[ \\mathrm { kVA } ] \\)๊ธ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ ์ ์ด๊ธฐ๋ฅผ ๊ตฌํํ๊ธฐ ์ํด ์ ๊ฐ์ ์์นฉ ๋ง์ดํฌ๋กํ๋ก์ธ์์ธ 80C196KC๋ฅผ ์ฌ์ฉํ์ฌ ๋์งํธ ์ ์ด๋ถ๋ฅผ ๊ตฌ์ฑํ๋ฉฐ ์ด๋ฅผ ํตํด ๋ณด์์ ๋ฅ ์ฑ๋ถ์ ๊ณ์ฐ ๋ฐ ์ง๋ฅ๋จ ์ผ์ ์ ์์ ์ด๋ฅผ ์ํํ๊ณ , ๋ฅ๋ํํฐ ์์คํ
์ ์ ๋ฅ์ ์ด๋ฅผ ์ํ์ฌ ์๋ ๋ก๊ทธ ํํ์ ์ ์ด๊ธฐ์ธ ํ์คํ
๋ฆฌ์์ค ์ ์ด๊ธฐ๋ฅผ ํจ๊ป ์ฌ์ฉํ๋ค', ' ์์ธ๋ฌ ๋น ๋ฅธ ์ ๋ฅ์๋ตํน์ฑ์ ๊ฐ์ง๊ณ ๋ณด์ ๊ธฐ์ค์ ๋ฅ์ ๊ธ๊ฒฉํ ๋ณํ๋ฅผ ์ ์ถ์ข
ํ๋ฉด์๋ ๊ทธ ๊ตฌํ๊ณผ ๋น์ฉ๋ฉด์์ ํจ์จ์ ์ธ ํ์คํ
๋ฆฌ์์ค ์ ์ด๊ธฐ๋ฅผ ์ปจ๋ฒํฐ์ ์ถ๋ ฅ์ ๋ฅ๋ฅผ ์ ์ดํ๋ ์ ๋ฅ์ ์ด๊ธฐ๋ก์จ ์ฌ์ฉํ์๋ค', ' ์ปดํจํฐ ์๋ฎฌ๋ ์ด์
์ ํตํด์ ๋ณด์ ์์คํ
์ ํน์ฑ์ ํด์ํ์์ผ๋ฉฐ ์คํ์ ์ํด ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ๋ฅผ ์ํด ์ค๊ณ๋ ์ ์ด๊ธฐ๊ฐ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์ ์ถฉ์คํ ์ํํจ์ ํ์ธํ์๋ค']
```
- **์์ฑ ๋ฐ์ดํฐ Output**
```
{output_starting_comment}
๋ฌธ์ฅ๋ค์ ๋ฐฐ์ด์ ๋ถ์ํ์ฌ ์๋ณธ ๋ฌธ๋งฅ์ ์ถ๋ฆฌํด ๋ณด๊ฒ ์ต๋๋ค.
{original_text}
์ด ์์ฝ์ผ๋ก๋ถํฐ ์ ์ฒด ๋
ผ๋ฌธ์ ์ฌ์์ฑํ๋ ์์
์ ์์ํ๊ฒ ์ต๋๋ค.
<h1>์ ์ฝ</h1> <p>๋น์ ํ ๋ฐ๋์ฒด ์ ๋ ฅ๋ณํ์ฅ์น์ ์ฌ์ฉ์ด ๊ธ์ฆํจ์ ๋ฐ๋ผ ์ ์ ์ธก์ ๋ฐ์๋๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๋ณด์ํ๊ธฐ ์ํ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ดํ ์ฐ๊ตฌ๊ฐ ๋ง์ด ์ด๋ฃจ์ด์ ธ ์์ผ๋ฉฐ ์ค์ฉํ๋ฅผ ์ํ ๋
ธ๋ ฅ์ด ๊ณ์ ๋๊ณ ์๋ค. ๊ทธ๋ฌ๋ ์๋ํํฐ ๋๋น ๋ฅ๋์ ๋ ฅํํฐ์ ๊ฐ๊ฒฉ์ด ์์ง๊น์ง๋ ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ์ด์ ์์ฉํ๊ฐ ๋๋๊ฒ ์ง์ ๋๊ณ ์๋ ์ถ์ธ์ด๋ฉฐ, ํนํ ์ยท์ค์ฉ๋์ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ์ด๋ฅผ ์ํ์ฌ ๋์งํธ ์ ํธ ์ฒ๋ฆฌ์ฉ ํ๋ก์ธ์์ธ DSP(digital signal processor)๋ฅผ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์์ง๊น์ง ๊ทธ ๊ฐ๊ฒฉ์ด ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ๋ฅ๋ํํฐ์ ์ ์ด๊ฐ๊ฒฉ์ ์์น์ํค๋ ์์ธ์ผ๋ก ์์ฉํ๋ค. ํํธ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ฐ๊ฒฉ์ ๋ฎ์ถ๊ธฐ ์ํด ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๋ง์ ๋์
ํ๋ ๊ฒฝ์ฐ ์ ์ดํ๋ก๊ฐ ๋๋ฌด ๋ณต์กํด ์ง๊ณ ์ ์ด์ ์ ์ฐ์ฑ์ด ๋จ์ด์ง๋ ๋จ์ ์ ์๋ฐํ๊ฒ ๋๋ค. ๋ณธ ๋
ผ๋ฌธ์์๋ 3์ \( 5[ \mathrm { kVA } ] \)๊ธ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ ์ ์ด๊ธฐ๋ฅผ ๊ตฌํํ๊ธฐ ์ํด ์ ๊ฐ์ ์์นฉ ๋ง์ดํฌ๋กํ๋ก์ธ์์ธ 80C196KC๋ฅผ ์ฌ์ฉํ์ฌ ๋์งํธ ์ ์ด๋ถ๋ฅผ ๊ตฌ์ฑํ๋ฉฐ ์ด๋ฅผ ํตํด ๋ณด์์ ๋ฅ ์ฑ๋ถ์ ๊ณ์ฐ ๋ฐ ์ง๋ฅ๋จ ์ผ์ ์ ์์ ์ด๋ฅผ ์ํํ๊ณ , ๋ฅ๋ํํฐ ์์คํ
์ ์ ๋ฅ์ ์ด๋ฅผ ์ํ์ฌ ์๋ ๋ก๊ทธ ํํ์ ์ ์ด๊ธฐ์ธ ํ์คํ
๋ฆฌ์์ค ์ ์ด๊ธฐ๋ฅผ ํจ๊ป ์ฌ์ฉํ๋ค. ์ปดํจํฐ ์๋ฎฌ๋ ์ด์
์ ํตํด์ ๋ณด์ ์์คํ
์ ํน์ฑ์ ํด์ํ์์ผ๋ฉฐ ์คํ์ ์ํด ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ๋ฅผ ์ํด ์ค๊ณ๋ ์ ์ด๊ธฐ๊ฐ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์ ์ถฉ์คํ ์ํํจ์ ํ์ธํ์๋ค.</p> <h1>1. ์ ๋ก </h1> <p>๋น์ ํ ๋ฐ๋์ฒด ์ ๋ ฅ๋ณํ์ฅ์น์ ์ฌ์ฉ์ด ๊ธ์ฆํ๊ณ ๋์ฉ๋ํ ๋์ด๊ฐ์ ๋ฐ๋ผ ์ ์ ์ธก์ ๋ฐ์๋๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๋ฐ์์ ๊ณํต ์ ์์ ์ง์ ๋จ์ด๋จ๋ฆฌ๋ ์ฌ๊ฐํ ๋ฌธ์ ๋ก ๋๋๋์ด ์์ผ๋ฉฐ, ์ด๋ฅผ ํด๊ฒฐํ๊ณ ์ ํ๋ ๋
ธ๋ ฅ์ด ๊ณ์์ ์ผ๋ก ์ด๋ฃจ์ด ์ง๊ณ ์๋ค. ํนํ ๊ณํต๋ด์ ๊ณ ์กฐํ๋ฅผ ์ ๊ฑฐํ๊ธฐ ์ํด์ ์ง์์ ์ผ๋ก ์ฌ์ฉ๋์ด ์๋ ๊ธฐ์กด์ ์๋ํํฐ๋ฅผ ๋์ฒดํ๋ ๋ฅ๋์ ๋ ฅํํฐ์ ๊ดํ ์ฐ๊ตฌ๊ฐ ์๋นํ ์ง์ ์ ์ด๋ฃจ์ด ์์ผ๋ฉฐ \( d \)- \( q \) ๋ณํ ์ด๋ก ์ ๋์
ํ ๋ณด์ ๊ฒ์ถ ์๊ณ ๋ฆฌ์ฆ ๋ฐ ์ฌ๋ฌ ๊ฐ์ง ์ ๋ฅ์ ์ด๊ธฐ๋ฒ๋ค์ ๊ฐ๋ฐ๋ก 3์ ์ ์์์์ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ์ ๊ดํ ๋ณด์์ด ๋ณด๋ค ํจ์จ์ ์ผ๋ก ์ด๋ฃจ์ด ์ง๊ฒ ๋์๋ค. ๋ฟ๋ง์๋๋ผ ์ ์ ์ธก์ ๋ฐ์ํ๋ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์ ์ํ ๋ฅ๋์ ๋ ฅํํฐ์ ์ํํ ์ ์ด๋ฅผ ์ํ์ฌ ํ๋์ ์ด ์ด๋ก ์ ๋์
ํ ๋ณต์กํ ์ฐ์ฐ๊ณผ์ ์ ์๋ฐํ๋ ์ ์ด๊ธฐ๋ค์ ๋ฑ์ฅ๊ณผ ๊ณต๊ฐ๋ฒกํฐ ๋ณ์กฐ๊ธฐ๋ฒ์ ์ํ์ ์ํด ๊ณ ์ ๋์งํธ ํ๋ก์ธ์์ ์ฌ์ฉ์ด ์ ํธ๋๊ณ ๋ณดํธํ ๋๊ณ ์๋ ์ถ์ธ์ ์๋ค. ๊ทธ๋ฌ๋ ํ์ฌ ๊ณ ์ ๋์งํธ ํ๋ก์ธ์๋ก์จ ๋ง์ด ์ฌ์ฉ๋๊ณ ์๋ DSP(Digital Signal Processing) ์นฉ(chip)์ ๊ฐ๊ฒฉ์ด ๋ง์ด ๋ฎ์์ง๊ธฐ๋ ํ์ผ๋ ์์ง๊น์ง๋ ๊ทธ ๊ฐ๊ฒฉ์ด ์๋ ๋ก๊ทธ ํ๋ก๋ง์ ์ด์ฉํ๋ ์ ์ด๊ธฐ์ ๋นํ์ฌ ๊ณ ๊ฐ์ด๊ธฐ ๋๋ฌธ์ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ์ด๋ถ ๊ตฌ์ฑ์ ํ์ํ ๊ฐ๊ฒฉ์ ์์น์ํค๋ ์์ธ์ผ๋ก ์์ฉํ๋ค. ํนํ ์ยท์ค์ฉ๋์ ๋ฅ๋ํํฐ ์์คํ
์ด ํ์๋ก ๋๋ ๊ฒฝ์ฐ์๋ ์ ์ด๋ถ์ ๊ฐ๊ฒฉ์ด ์ ์ฒด์ ์ธ ์์คํ
์ ๊ฐ๊ฒฉ๋ฉด์์ ์ฐจ์งํ๋ ๋น์ค์ด ํฌ๊ธฐ ๋๋ฌธ์ ํ์ธต ๋ ๊ณ ๋ ค์ ๋์์ด ๋๋ค. ๋ฐ๋ฉด ๋ฅ๋ํํฐ์ ์ ์ด๊ฐ๊ฒฉ์ ๋ฎ์ถ๊ธฐ ์ํด์ ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๋ง์ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ์ ์ดํ๋ก๊ฐ ๋๋ฌด ๋ณต์กํด์ง๊ณ ์ ์ด์ ์ ์ฐ์ฑ์ด ๋จ์ด์ง๋ ๋จ์ ์ ์๋ฐํ๊ฒ ๋๋ค. ๋ฐ๋ผ์ ๋ณธ ๋
ผ๋ฌธ์์๋ ๋ฅ๋์ ๋ ฅํํฐ์ ์ ๊ฐํ๋ฅผ ์คํํ๊ณ , ๋ฅ๋ํํฐ์ ๋ณด์ ๊ธฐ์ค์ ๋ฅ๋ฅผ ๊ณ์ฐํ๋๋ฐ ์์ด ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ ๋ง์ ์ฌ์ฉํ๋ ๊ฒฝ์ฐ ๋ฐ์ํ๋ ๊ณ์ฐ์์ ๋ฒ๊ฑฐ๋ก์์ ๊ทน๋ณตํ๊ธฐ ์ํด ๋์งํธ ๋ฐ ์๋ ๋ก๊ทธ ์ ์ด๊ธฐ๊ฐ ๊ฒฐํฉ๋ ํํ์ ์ ์ด๊ธฐ ์ค๊ณ๋ฅผ ์ํํ์๋ค. ์ด๋ฅผ ์ํด ์ ๊ฐ์ ์์นฉ ๋ง์ดํฌ๋กํ๋ก์ธ์์ธ ์ธํ
์ฌ 80C196KC๋ฅผ ์ฌ์ฉํ์ฌ ๋์งํธ ์ ์ด๋ถ๋ฅผ ๊ตฌ์ฑํ๊ณ ๋ฅ๋์ ๋ ฅํํฐ์ ๋ณด์ ๊ธฐ์ค์ ๋ฅ ์ฑ๋ถ์ ๊ณ์ฐ ๋ฐ ๋์งํธ PI ์ ์ด๊ธฐ๋ฅผ ์ฌ์ฉํ ์ปจ๋ฒํฐ ์ง๋ฅ๋จ ์ผ์ ์ ์ ์ ์ด๋ฅผ ์ํํ์๋ค. ์์ธ๋ฌ ๋น ๋ฅธ ์ ๋ฅ์๋ตํน์ฑ์ ๊ฐ์ง๊ณ ๋ณด์ ๊ธฐ์ค์ ๋ฅ์ ๊ธ๊ฒฉํ ๋ณํ๋ฅผ ์ ์ถ์ข
ํ๋ฉด์๋ ๊ทธ ๊ตฌํ๊ณผ ๋น์ฉ๋ฉด์์ ํจ์จ์ ์ธ ํ์คํ
๋ฆฌ์์ค ์ ์ด๊ธฐ๋ฅผ ์ปจ๋ฒํฐ์ ์ถ๋ ฅ์ ๋ฅ๋ฅผ ์ ์ดํ๋ ์ ๋ฅ์ ์ด๊ธฐ๋ก์จ ์ฌ์ฉํ์๋ค. ์ปดํจํฐ ์๋ฎฌ๋ ์ด์
๋ฐ 3์ \( 5[ \mathrm { kVA } ] \)๊ธ ๋ฅ๋์ ๋ ฅํํฐ ์์คํ
์ ์ ์ํ์ฌ ์คํ์ ์ค์ํ์์ผ๋ฉฐ, ์ด๋ฅผ ํตํด ๋ฅ๋ํํฐ์ ์ ๊ฐํ๋ฅผ ์ํ์ฌ ์ค๊ณ๋ ์ ์ด๊ธฐ๊ฐ ๊ณ ์กฐํ ๋ฐ ๋ฌดํจ์ ๋ ฅ ๋ณด์์์ฉ์ ์ถฉ์คํ ์ํํจ์ ๋ณด์๋ค.</p>
๋
ผ๋ฌธ ์ ์ฒด๋ฅผ ์ฑ๊ณต์ ์ผ๋ก ์์ฑํ์ต๋๋ค. ์ด ์ฐ๊ตฌ๋ ์ ์๋ ์์ฝ์ ํ ๋๋ก ํ์ฌ ์ค์ํ ๋ฐ๊ฒฌ๊ณผ ๊ฒฐ๋ก ์ ๋ด๊ณ ์์ต๋๋ค.
{output_ending_comment}
์ฃผ์ด์ง ๋ฌธ์ฅ๋ค์ ์์์ ๋ง์ถ์ด ์๋ณธ ๋ฌธ๋งฅ์ ์์ฑํด๋ณด์์ต๋๋ค.
```
### 6. AIHUB ์ฐ์
์ ๋ณด ์ฐ๊ณ ์ฃผ์๊ตญ ํนํ ์-ํ ๋ฐ์ดํฐ -> [Megastudy/M-SOLAR-10.7B-v1.1](https://huggingface.co/Megastudy/M-SOLAR-10.7B-v1.1)์์ ์ฌ์ฉ
- ๋ชจ๋ธ์ ์์ด ๋ฒ์ญ ๋ฅ๋ ฅ์ ํฅ์์ํค๊ธฐ ์ํด ๋ฒ์ญ ๋ฐ์ดํฐ ์ผ๋ถ๋ฅผ ์ฌ์ฉํ์ต๋๋ค.
- ๋ชจ๋ธ ํ์ต ์๋ ์ฆ๊ฐ์ ๋ชจ๋ธ์ ์ถ๋ก ๋ฅ๋ ฅ์ Capacity๋ฅผ ์ฆ๊ฐ์ํค๊ธฐ ์ํด ๋ฒ์ญ๋ฌธ ์กฐํฉํ์ฌ ๋ฐ์ดํฐ์
์ ๋์ด๋๋ฅผ ๋์์ต๋๋ค.
- **์๋ณธ ๋ฐ์ดํฐ**
```json
{'application_number': '15786463.8',
'literature_number': '000003137431A1',
'invention_title_eng': 'CONCRETE MATERIALS WITH MODIFIED RHEOLOGY, METHODS OF MAKING, AND USES THEREOF',
'invention_title_kor': '๊ฐ์ง๋ ๋ ์ฌ๋ก์ง ์ฝํธ๋ฆฌํธ์ฌ, ์ ์กฐ๋ฐฉ๋ฒ ๋ฐ ๊ทธ ์ฉ๋',
'country_code': 'EP',
'astrt_cont_eng': 'A composition, in the form of a concrete, cement paste, mortar or intermediate thereof, comprising one or more cementitious binder materials present in an amount ranging from 0.5% to 75% by weight of the composition, attapulgite present in an amount ranging from 0.01% to 1.00% by weight of the composition, and optionally water. Although subject to many uses, in some embodiments, the composition is suitable for applications in need or desire of self-consolidating concretes (SCCs), precast concrete, shotcrete, and the like. Although makeable by multiple methods, in some embodiments, the composition is made by mixing the component ingredients. In some embodiments, the composition has one or more properties chosen from superior aggregate suspension, eliminated segregation, improved workability, improved flowability, improved pumpability-or improved overall performance-of concretes.',
'astrt_cont_kor': ' ์ฝํฌ๋ฆฌํธ, ์๋ฉํธ ํ์ด์คํธ, ๋ชจ๋ฅดํ๋ฅด ๋๋ ๊ทธ ์ค๊ฐ ์์ฑ๋ฌผ์ ํํ์ ์กฐ์ฑ๋ฌผ์, ์กฐ์ฑ๋ฌผ ์ค๋์ 0.5%~75%์ธ ์๋ฉํธ ๋ฐ์ธ๋์ฌ, ์กฐ์ฑ๋ฌผ ์ค๋์ 0.01%~1.00%์ธ ์ ํฐํ์์ดํธ(attapulgite) ๋ฐ ์ ํ์ ์ผ๋ก ๋ฌผ์ ํฌํจํ๋ค. ๋ง์ ์ฉ๋๊ฐ ์๊ธฐ๋ ํ์ง๋ง, ์ด๋ค ์ค์์๋ค์ ์์ด์ ์๊ธฐ ์กฐ์ฑ๋ฌผ์ ์๊ธฐ ์ถฉ์ ์ฝํฌ๋ฆฌํธ(SCCs), ํ๋ฆฌ์บ์คํธ ์ฝํฌ๋ฆฌํธ, ์ํฌ๋ฆฌํธ ๋ฑ์ด ํ์ํ๊ฑฐ๋ ์์ฒญ๋ ๋์ ์ด์ฉ์ ์ ํฉํ๋ค. ์ฌ๋ฌ ๊ฐ์ง ๋ฐฉ๋ฒ์ ์ํด์ ๋ง๋ค ์๋ ์๊ฒ ์ง๋ง, ์ด๋ค ์ค์์๋ค์ ์์ด์ ์๊ธฐ ์กฐ์ฑ๋ฌผ์ ๊ตฌ์ฑ ์ฑ๋ถ๋ค์ ๋ฐฐํฉ์ผ๋ก ๋ง๋ค์ด์ง๋ค. ์ด๋ค ์ค์์๋ค์์๋ ์๊ธฐ ์กฐ์ฑ๋ฌผ์ ์ฝํฌ๋ฆฌํธ์ ํ์ํ ๊ณจ์ฌ ์์คํ์
, ์ฌ๋ฃ๋ถ๋ฆฌ์ ์ ๊ฑฐ, ํฅ์๋ ์๊ณต์ฑ, ํฅ์๋ ์ ๋์ฑ, ํฅ์๋ ํํ๋ฅ - ํน์ ์ ๋ฐ์ ์ผ๋ก ํฅ์๋ ์ฑ๋ฅ ์ค ํ๋ ์ด์ ์ ํ๋ ํน์ฑ์ ๊ฐ์ง๋ค.',
'claim_eng': 'A composition, comprising: one or more cementitious binder materials present in an amount ranging from 0.5% to 75% by dry weight of the composition; and at least one member chosen from attapulgite present in an amount ranging from 0.01% to 4.00% by dry weight of the composition; sepiolite present in an amount ranging from 0.1% to 6% by weight of the composition; and palygorskite present in an amount ranging from 0.05% to 5% by weight of the composition.',
'claim_kor': ' ์กฐ์ฑ๋ฌผ ๊ฑด์กฐ ์ค๋์ 0.5%~75%์ธ ํ๋ ์ด์์ ์๋ฉํธ ๋ฐ์ธ๋์ฌ; ๋ฐ๋น์ ํฐํ์์ดํธ ๊ด๋ฌผ์ด ์ค์ง์ ์ผ๋ก ์๋๋ก ์ ์ ๋ ๊ฒ์ผ๋ก ์๊ธฐ ์กฐ์ฑ๋ฌผ ๊ฑด์กฐ ์ค๋์ 0.01%~4.00%์ธ ์ ์ ๋ ์ ํฐํ์์ดํธ๋ฅผ ํฌํจํ๋ ์กฐ์ฑ๋ฌผ.',
'ipc_number': ['C04B 007/00', 'C04B 007/32', 'C04B 014/04'],
'tech_word_eng': ['cement',
'aggregate',
'mortar',
'cast concrete',
'improved flowability',
'dry',
'composition'],
'tech_word_kor': ['์๋ฉํธ', '๊ณจ์ฌ', '๋ชจ๋ฅดํ๋ฅด', '์บ์คํธ ์ฝํฌ๋ฆฌํธ', 'ํฅ์๋ ์ ๋์ฑ', '๊ฑด์กฐ', '์กฐ์ฑ๋ฌผ'],
'Indstryclass_name_kor': ['๋ ๋ฏธ์ฝ ์ ์กฐ์
'],
'ksic_code': ['23322']}
```
- **์์ฑ ๋ฐ์ดํฐ Input**
```
{start_input_comment} --> instruction definition
๋ค์์ ๊ตญ์ ํนํ์ ๋ํ ์์ด ๋ฌธ์ฅ์
๋๋ค. ํ๊ตญ์ด๋ก ๋ฒ์ญํด ์ฃผ์ธ์:
{invention_title_eng}
0๋ฒ ํนํ ์ ๋ชฉ: AQUEOUS MARKING INK COMPOSITION
{astrt_cont_eng}
0๋ฒ ํนํ ์ด๋ก: An aqueous marking ink composition that contains at least 1 to 20% by mass of a resin which is water-soluble and has a hydrophobic part in a molecule thereof, a colorant containing at least 5 to 30% by mass of titanium oxide, 0.05 to 2% by mass of at least one selected from acetylene glycols, acetylene alcohols or derivatives thereof, 1 to 20% by mass of a water-soluble solvent having a vapor pressure of 5 mmHg or less at 25ยฐ C. and water. The composition can provide good wetting property to a non-absorbing surface of metals, resins and the like stained with oils, can write smoothly thereon at a low viscosity without being repelled to make it possible to obtain drawn lines having a high masking property and a high fixing property, and is readily redispersed by a simple stirring operation.
{claim_eng}
0๋ฒ ํนํ ์ฒญ๊ตฌํญ: An aqueous marking ink composition containing at least 1 to 20% by mass of a resin which is water-soluble and has a hydrophobic part in a molecule thereof, a colorant containing at least 5 to 30% by mass of titanium oxide, 0.05 to 2% by mass of at least one selected from acetylene glycols, acetylene alcohols or derivatives thereof, 1 to 20% by mass of a water-soluble solvent having a vapor pressure of 5 mmHg or less at 25ยฐ C. and water.
{invention_title_eng}
1๋ฒ ํนํ ์ ๋ชฉ: INK COMPOSITION FOR PHOTOCURABLE INKJET PRINTING
{astrt_cont_eng}
1๋ฒ ํนํ ์ด๋ก: An object is to provide an ink composition for inkjet printing that permits inkjet printing, particularly on metal surfaces, wherein such ink composition possesses sufficiently high adhesion and heat resistance with respect to metal surfaces. As a means for achieving the object, a photocurable inkjet printing ink composition is provided, which contains, relative to the entire ink composition, 0.1 percent by mass or more of an acid-modified acrylic monomer and 10 to 30 percent by mass of a monofunctional monomer with a Tg of 100ยฐC or above, wherein the monofunctional monomer accounts for 80 percent by mass or more of all monomers.
{claim_eng}
1๋ฒ ํนํ ์ฒญ๊ตฌํญ: A photocurable inkjet printing ink composition, containing, relative to an entire ink composition: 0.1 percent by mass or more of an acid-modified acrylic monomer; and 10 to 30 percent by mass of a monofunctional monomer with a Tg (glass transition temperature) of 100ยฐC or above; wherein the monofunctional monomer accounts for 80 percent by mass or more of all monomers.
```
- **์์ฑ ๋ฐ์ดํฐ Ouput**
```
{output_starting_comment}
๋ค์์ ์ ์๋ ์์ด ๋ฌธ์ฅ์ ํ๊ตญ์ด ๋ฒ์ญ์
๋๋ค:
{invention_title_kor}
0๋ฒ ํนํ ์ ๋ชฉ: ์์ฑ ๋งํน ์ํฌ ์กฐ์ฑ๋ฌผ
{astrt_cont_kor}
0๋ฒ ํนํ ์ด๋ก: ์ ์ญ์ , ์คํ์ ๋ฑ์ ๊ธฐ๋ฆ์ผ๋ก ๋๋ฌ์์ง ๊ธ์, ์์ง ๋ฑ์ ๋นํก์๋ฉด์ ๋ํ์ฌ ์ ๋นํ๊ณ ์ํธํ ์ ์์ฑ์ ๊ฐ์ง๊ณ , ํ๊ฒจ์ง์ง ์์ผ๋ฉฐ, ์ ์ ๋์์ ๋ถ๋๋ฝ๊ฒ ๊ทธ๋ฆด ์ ์๊ณ , ๋์ ์ํ์ฑ๊ณผ ๋์ ๊ณ ์ฐฉ์ฑ์ ๊ฐ์ง๋ ๋ฌ์ ์ ์ป์ ์ ์๊ณ , ๋์ฑ์ด ์๊ฐ ๊ฒฝ๊ณผ์ ์์ด์ ์ํ์ ๊ฐ ๋น๋ก ์นจ๊ฐํด๋ ๊ฐ๋จํ ๊ต๋ฐ ์กฐ์์ ๋ฐ๋ผ ์ฌ๋ถ์ฐ์ด ์ฉ์ดํ๊ณ , ์ํ์ฑ, ์์ ์ฑ์ด ๋ฐ์ด๋๊ณ , ํนํ, ๋น์ค์ด ์์ ๊ต๋ฐ ๋ถ์ฌ๋ฅผ ์ด์ฉํด๋ ๊ฐ๋จํ๊ฒ ๊ต๋ฐํ ์ ์๋ ์์ฑ ๋งํน ์ํฌ ์กฐ์ฑ๋ฌผ๋ก์, ๋ฌผ์ ๊ฐ์ฉ์ด๊ณ ๊ทธ ๋ถ์ ๋ด์ ์์๋ถ๋ฅผ ๊ฐ์ง๋ ์์ง 1โผ20 ์ค๋%์, ์ฐํํฐํ 5โผ30 ์ค๋%๋ฅผ ์ ์ด๋ ํฌํจํ๋ ์ฐฉ์์ ์, ์์ธํธ๋ ๊ธ๋ฆฌ์ฝ๋ฅ, ์์ธํธ๋ ์์ฝ์ฌ๋ฅ ๋๋ ์ด๋ค์ ์ ๋์ฒด๋ก๋ถํฐ ์ ํ๋๋ ์ ์ด๋ 1์ข
0.05โผ2 ์ค๋%์, 25โ์์์ ์ฆ๊ธฐ์์ด 5 ใHg ์ดํ์ ์์ฉ์ฑ ์ฉ์ 1โผ20 ์ค๋%์ ๋ฌผ์ ์ ์ด๋ ํจ์ ํ๋ ๊ตฌ์ฑ์ผ๋ก ํ๋ค.
{claim_kor}
0๋ฒ ํนํ ์ฒญ๊ตฌํญ: ๋ฌผ์ ๊ฐ์ฉ์ด๊ณ ๊ทธ ๋ถ์ ๋ด์ ์์๋ถ๋ฅผ ๊ฐ์ง๋ ์์ง 1 ~ 20 ์ค๋%์ ์ฐํํฐํ 5 ~ 30 ์ค๋%๋ฅผ ์ ์ด๋ ํฌํจํ๋ ์ฐฉ์์ ์, ์์ธํธ๋ ๊ธ๋ฆฌ์ฝ๋ฅ, ์์ธํธ๋ ์์ฝ์ฌ๋ฅ ๋๋ ์ด๋ค์ ์ ๋์ฒด๋ก๋ถํฐ ์ ํ๋๋ ์ ์ด๋ 1์ข
0.05 ~ 2 ์ค๋%์, 25โ์์์ ์ฆ๊ธฐ์์ด 5 ใHg ์ดํ์ ์์ฉ์ฑ ์ฉ์ 1 ~ 20 ์ค๋%์ ๋ฌผ์ ์ ์ด๋ ํจ์ ํ๋ ๊ฒ์ ํน์ง์ผ๋ก ํ๋ ์์ฑ ๋งํน ์ํฌ ์กฐ์ฑ๋ฌผ.
1๋ฒ ํนํ ์ ๋ชฉ: ๊ด๊ฒฝํํ ์ํฌ์ ฏ ์ธ์์ฉ ์ํฌ ์กฐ์ฑ๋ฌผ
1๋ฒ ํนํ ์ด๋ก: ๋ณธ ๋ฐ๋ช
์ ํนํ ๊ธ์ ํ๋ฉด์ ๋ํ ์ํฌ์ ฏ ์ธ์๋ฅผ ํ๋ ๊ฒ์ด ๊ฐ๋ฅํ ์ํฌ์ ฏ์ฉ ์ํฌ ์กฐ์ฑ๋ฌผ๋ก์, ๊ธ์ ํ๋ฉด์ ๋ํด ์ถฉ๋ถํ ๋์ ๋ฐ์ฐฉ์ฑ ๋ฐ ๋ด์ด์ฑ์ ๊ตฌ๋นํ ์ํฌ ์กฐ์ฑ๋ฌผ์ ์ป๋ ๊ฒ์ ๊ณผ์ ๋ก ํ๋ค. ์๊ธฐ ๊ณผ์ ๋ฅผ ํด๊ฒฐํ๊ธฐ ์ํ ํด๊ฒฐ์๋จ์ผ๋ก์, ๋ณธ ๋ฐ๋ช
์ ์ํฌ ์กฐ์ฑ๋ฌผ ์ ์ฒด์ ๋ํด ์ฐ๋ณ์ฑ ์ํฌ๋ฆด ๋ชจ๋
ธ๋จธ๋ฅผ 0.1 ์ง๋% ์ด์, Tg๊ฐ 100โ ์ด์์ธ ๋จ๊ด๋ฅ ๋ชจ๋
ธ๋จธ๋ฅผ 10โผ30 ์ง๋%, ์ ์ฒด ๋ชจ๋
ธ๋จธ์ ๋ํด ๋จ๊ด๋ฅ ๋ชจ๋
ธ๋จธ๋ฅผ 80 ์ง๋% ์ด์ ํจ์ ํ๋ ๊ด๊ฒฝํํ ์ํฌ์ ฏ ์ธ์์ฉ ์ํฌ ์กฐ์ฑ๋ฌผ์ ์ ๊ณตํ๋ค.
1๋ฒ ํนํ ์ฒญ๊ตฌํญ: ์ํฌ ์กฐ์ฑ๋ฌผ ์ ์ฒด์ ๋ํด,์ฐ๋ณ์ฑ ์ํฌ๋ฆด ๋ชจ๋
ธ๋จธ๋ฅผ 0.1 ์ง๋% ์ด์,Tg(์ ๋ฆฌ ์ ์ด ์จ๋)๊ฐ 100โ ์ด์์ธ ๋จ๊ด๋ฅ ๋ชจ๋
ธ๋จธ๋ฅผ 10โผ30 ์ง๋%,์ ์ฒด ๋ชจ๋
ธ๋จธ์ ๋ํด,๋จ๊ด๋ฅ ๋ชจ๋
ธ๋จธ๋ฅผ 80 ์ง๋% ์ด์ํจ์ ํ๋ ๊ด๊ฒฝํํ ์ํฌ์ ฏ ์ธ์์ฉ ์ํฌ ์กฐ์ฑ๋ฌผ.
{output_ending_comment}
์ด์์ผ๋ก ๋ฒ์ญ์ ์ข
๋ฃํฉ๋๋ค. ์ถ๊ฐ ๋ฒ์ญ์ด ํ์ํ๋ฉด ์ธ์ ๋ ์ง ์์ฒญํ์ธ์.
```
- ๋ฐ์ดํฐ์
์ ์์ ์ค์ด๊ธฐ ์ํด ํ๋์ ํนํ๋ฌธ์ด ์๋ 4096 Token๋ด์์ 1~3๊ฐ์ ํนํ๋ฅผ ์กฐํฉํ์ฌ ์ฌ์ฉํ์ต๋๋ค.
### 7. [HumanF-MarkrAI/WIKI_QA_Near_dedup ๐](https://huggingface.co/datasets/HumanF-MarkrAI/WIKI_QA_Near_dedup) -> [Megastudy/M-SOLAR-10.7B-v1.1](https://huggingface.co/Megastudy/M-SOLAR-10.7B-v1.1)์์ ์ฌ์ฉ
### 8. [kyujinpy/KoCoT_2000 ๐](https://huggingface.co/datasets/kyujinpy/KoCoT_2000) -> [Megastudy/M-SOLAR-10.7B-v1.1](https://huggingface.co/Megastudy/M-SOLAR-10.7B-v1.1)์์ ์ฌ์ฉ
์ด ์ฐ๊ตฌ๋ ๊ณผํ๊ธฐ์ ์ ๋ณดํต์ ๋ถยท๊ด์ฃผ๊ด์ญ์๊ฐ ๊ณต๋ ์ง์ํ '์ธ๊ณต์ง๋ฅ ์ค์ฌ ์ฐ์
์ตํฉ ์ง์  ๋จ์ง ์กฐ์ฑ์ฌ์
'์ผ๋ก ์ง์์ ๋ฐ์ ์ํ๋ ์ฐ๊ตฌ ๊ฒฐ๊ณผ์
๋๋ค. |
fblgit/UNAversal-8x7B-v1beta | fblgit | 2024-03-08T10:28:21Z | 1,375 | 8 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"UNA",
"juanako",
"MoE",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-26T15:58:15Z | ---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
tags:
- UNA
- juanako
- mixtral
- MoE
model-index:
- name: UNAversal-8x7B-v1beta
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.97
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
---
# UNAversal - Uniform Neural Alignment (MoE)
This is just a beta, a first release, so people can start working on frankensteins and so on.
It does achieve high GSM/Math and TQA scores, so ideally you can merge it with other Mixtrals and see what comes out of it.
Based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## UNA Details
For this model we went with the most obvious approach: placing UNA on the router_logit. It does work, but we saw much better performance on SFT by doing so.
So this model DOES have a UNA-SFT phase; it is highly experimental and merely used LLaMA-Factory example datasets such as alpaca.
As the others:
- Can be finetuned further; try 2e-5 or **1e-4 (since it's a MoE)**
- Can be merged; here you will have to improvise, and please report findings in a discussion thread.
**REMINDER**: please cite; it really does help the research and the lab itself, seriously.
## NEED YOUR HELP!!
I need a multi-turn training loop for Mixtral that can properly squeeze the juice out of 8x H100s. Please feel free to reach @fblgit on either Discord or Twitter. Thanks!
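For convenience, here is a minimal inference sketch. It is not part of the original card; it simply assumes the standard `transformers` causal-LM interface (plus `accelerate` for `device_map="auto"`) and enough GPU memory for an 8x7B MoE.

```python
# Minimal, hedged usage sketch (not from the original card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/UNAversal-8x7B-v1beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires accelerate; shards the experts across available GPUs
    torch_dtype="auto",
)

prompt = "Explain in two sentences what the router does in a Mixture-of-Experts model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```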
# Evals
Here are some, but we also submitted it to the HF eval queue.
## GSM8k 5-Shot
```
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6603|ยฑ | 0.013|
```
## ARC 25-Shot
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.6621|ยฑ |0.0138|
| | |none | 25|acc_norm|0.6962|ยฑ |0.0134|
```
## TruthfulQA 0-Shot (MC2)
```
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7122|ยฑ |0.0141|
```
## 0-Shots Evals
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|----------|-----:|---|-----:|
|arc_challenge |Yaml |none | 0|acc |0.6101|ยฑ |0.0143|
| | |none | 0|acc_norm |0.6425|ยฑ |0.0140|
|arc_easy |Yaml |none | 0|acc |0.8615|ยฑ |0.0071|
| | |none | 0|acc_norm |0.8375|ยฑ |0.0076|
|boolq |Yaml |none | 0|acc |0.8624|ยฑ |0.0060|
|lambada_openai|Yaml |none | 0|perplexity|2.8318|ยฑ |0.0507|
| | |none | 0|acc |0.7650|ยฑ |0.0059|
|mathqa |Yaml |none | 0|acc |0.4472|ยฑ |0.0091|
| | |none | 0|acc_norm |0.4436|ยฑ |0.0091|
|piqa |Yaml |none | 0|acc |0.8292|ยฑ |0.0088|
| | |none | 0|acc_norm |0.8422|ยฑ |0.0085|
|pubmedqa |Yaml |none | 0|acc |0.7920|ยฑ |0.0182|
|sciq |Yaml |none | 0|acc |0.9630|ยฑ |0.0060|
| | |none | 0|acc_norm |0.9370|ยฑ |0.0077|
```
## BBH at 0-Shot
```
vllm (pretrained=fblgit/UNAversal-8x7B-v1beta,tensor_parallel_size=2,data_parallel_size=4,gpu_memory_utilization=0.8,dtype=float16), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: auto
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|ยฑ |0.1772|
| - bbh_cot_fewshot_boolean_expressions |Yaml |get-answer| 0|exact_match|0.8840|ยฑ |0.0203|
| - bbh_cot_fewshot_causal_judgement |Yaml |get-answer| 0|exact_match|0.6417|ยฑ |0.0352|
| - bbh_cot_fewshot_date_understanding |Yaml |get-answer| 0|exact_match|0.7600|ยฑ |0.0271|
| - bbh_cot_fewshot_disambiguation_qa |Yaml |get-answer| 0|exact_match|0.7160|ยฑ |0.0286|
| - bbh_cot_fewshot_dyck_languages |Yaml |get-answer| 0|exact_match|0.1800|ยฑ |0.0243|
| - bbh_cot_fewshot_formal_fallacies |Yaml |get-answer| 0|exact_match|0.6520|ยฑ |0.0302|
| - bbh_cot_fewshot_geometric_shapes |Yaml |get-answer| 0|exact_match|0.3880|ยฑ |0.0309|
| - bbh_cot_fewshot_hyperbaton |Yaml |get-answer| 0|exact_match|0.9600|ยฑ |0.0124|
| - bbh_cot_fewshot_logical_deduction_five_objects |Yaml |get-answer| 0|exact_match|0.5360|ยฑ |0.0316|
| - bbh_cot_fewshot_logical_deduction_seven_objects |Yaml |get-answer| 0|exact_match|0.5040|ยฑ |0.0317|
| - bbh_cot_fewshot_logical_deduction_three_objects |Yaml |get-answer| 0|exact_match|0.8600|ยฑ |0.0220|
| - bbh_cot_fewshot_movie_recommendation |Yaml |get-answer| 0|exact_match|0.7840|ยฑ |0.0261|
| - bbh_cot_fewshot_multistep_arithmetic_two |Yaml |get-answer| 0|exact_match|0.6600|ยฑ |0.0300|
| - bbh_cot_fewshot_navigate |Yaml |get-answer| 0|exact_match|0.8160|ยฑ |0.0246|
| - bbh_cot_fewshot_object_counting |Yaml |get-answer| 0|exact_match|0.8360|ยฑ |0.0235|
| - bbh_cot_fewshot_penguins_in_a_table |Yaml |get-answer| 0|exact_match|0.7329|ยฑ |0.0367|
| - bbh_cot_fewshot_reasoning_about_colored_objects |Yaml |get-answer| 0|exact_match|0.8120|ยฑ |0.0248|
| - bbh_cot_fewshot_ruin_names |Yaml |get-answer| 0|exact_match|0.4440|ยฑ |0.0315|
| - bbh_cot_fewshot_salient_translation_error_detection |Yaml |get-answer| 0|exact_match|0.5200|ยฑ |0.0317|
| - bbh_cot_fewshot_snarks |Yaml |get-answer| 0|exact_match|0.7135|ยฑ |0.0340|
| - bbh_cot_fewshot_sports_understanding |Yaml |get-answer| 0|exact_match|0.9400|ยฑ |0.0151|
| - bbh_cot_fewshot_temporal_sequences |Yaml |get-answer| 0|exact_match|0.7560|ยฑ |0.0272|
| - bbh_cot_fewshot_tracking_shuffled_objects_five_objects |Yaml |get-answer| 0|exact_match|0.5680|ยฑ |0.0314|
| - bbh_cot_fewshot_tracking_shuffled_objects_seven_objects|Yaml |get-answer| 0|exact_match|0.6280|ยฑ |0.0306|
| - bbh_cot_fewshot_tracking_shuffled_objects_three_objects|Yaml |get-answer| 0|exact_match|0.6280|ยฑ |0.0306|
| - bbh_cot_fewshot_web_of_lies |Yaml |get-answer| 0|exact_match|0.9560|ยฑ |0.0130|
| - bbh_cot_fewshot_word_sorting |Yaml |get-answer| 0|exact_match|0.3800|ยฑ |0.0308|
|Groups|Version| Filter |n-shot| Metric |Value | |Stderr|
|------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|ยฑ |0.1772|
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNAversal-8x7B-v1beta)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.78|
|AI2 Reasoning Challenge (25-Shot)|69.80|
|HellaSwag (10-Shot) |86.90|
|MMLU (5-Shot) |70.39|
|TruthfulQA (0-shot) |71.97|
|Winogrande (5-shot) |82.00|
|GSM8k (5-shot) |61.64|
|
samir-fama/SamirGPT-v1 | samir-fama | 2024-01-12T14:21:16Z | 1,375 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-28T18:00:18Z | ---
license: apache-2.0
language:
- en
tags:
- merge
---

# SamirGPT-v1
Currently SamirGPT-v1 is a merge of [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp) and [viethq188/LeoScorpius-7B](https://huggingface.co/viethq188/LeoScorpius-7B) |
Azazelle/xDAN-SlimOrca | Azazelle | 2024-06-05T23:37:47Z | 1,375 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-29T17:46:23Z | ---
license: cc-by-4.0
tags:
- mistral
- merge
pipeline_tag: text-generation
model-index:
- name: xDAN-SlimOrca
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.61
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 57.68
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.92
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Azazelle/xDAN-SlimOrca
name: Open LLM Leaderboard
---
# Model Card for xDAN-SlimOrca
<!-- Provide a quick summary of what the model is/does. -->
Slerp merge of xDAN-L1-Chat-RL-v1 and mistral-7b-slimorcaboros.
.yaml file for mergekit
```yaml
slices:
- sources:
- model: xDAN-AI/xDAN-L1-Chat-RL-v1
layer_range: [0, 32]
- model: openaccess-ai-collective/mistral-7b-slimorcaboros
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
t:
- filter: self_attn
value: [0.14, 0.57, 0.4, 0.74, 1]
- filter: mlp
value: [0.86, 0.43, 0.6, 0.26, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
```
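For reference, below is a minimal sketch of the spherical linear interpolation (SLERP) that `merge_method: slerp` performs per tensor pair. It is illustrative only, not mergekit's actual implementation, which additionally applies the per-filter `t` schedules listed in the config above.

```python
# Illustrative SLERP between two weight tensors (not mergekit's real code).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between tensors a and b with factor t in [0, 1]."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))  # angle between the tensors
    if omega.abs() < eps:  # nearly parallel: plain linear interpolation is fine
        out = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```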
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Azazelle__xDAN-SlimOrca)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.04|
|AI2 Reasoning Challenge (25-Shot)|65.61|
|HellaSwag (10-Shot) |85.70|
|MMLU (5-Shot) |63.67|
|TruthfulQA (0-shot) |57.68|
|Winogrande (5-shot) |77.66|
|GSM8k (5-shot) |57.92|
|
SicariusSicariiStuff/Tinybra_13B | SicariusSicariiStuff | 2024-03-04T14:07:31Z | 1,375 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-29T22:07:45Z | ---
language:
- en
license: apache-2.0
model-index:
- name: Tinybra_13B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 55.72
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 80.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.37
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.14
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 18.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=SicariusSicariiStuff/Tinybra_13B
name: Open LLM Leaderboard
---
<div align="center">
<b style="font-size: 40px;">Tinybra_13B_FP16</b>
</div>
<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">
# Model Details
Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.
Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tinybră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.
While Tinybră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!
## Tenebră is available in the following sizes and flavours:
- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw)
### Support
<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">
- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go toward research resources and compute; every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go toward research resources and compute; every bit counts 🙏🏻
## Disclaimer
*This model is pretty uncensored; use it responsibly.
## Other stuff
- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tinybra_13B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.36|
|AI2 Reasoning Challenge (25-Shot)|55.72|
|HellaSwag (10-Shot) |80.99|
|MMLU (5-Shot) |54.37|
|TruthfulQA (0-shot) |49.14|
|Winogrande (5-shot) |73.80|
|GSM8k (5-shot) |18.12|
|
Azazelle/Yuna-7b-Merge | Azazelle | 2024-01-05T22:31:49Z | 1,375 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-05T22:05:39Z | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Argetsu
<!-- Provide a quick summary of what the model is/does. -->
Experimental DARE (no idea if this is decent).
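For context, here is a minimal sketch of the DARE idea used by `merge_method: dare_ties`: randomly drop a fraction of each fine-tuned model's delta from the base, rescale the surviving entries, then combine. This is illustrative only, and the real `dare_ties` additionally applies TIES-style sign election before summing the deltas.

```python
# Illustrative DARE delta (not mergekit's real implementation).
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, density: float, weight: float) -> torch.Tensor:
    """Keep a random `density` fraction of (tuned - base), rescale by 1/density, scale by `weight`."""
    delta = tuned - base
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return weight * mask * delta / density

# A merged tensor would then look roughly like:
# merged = base + sum(dare_delta(base, m, d, w) for m, d, w in tuned_models)
```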
.yaml file for mergekit
```yaml
models:
- model: Dans-DiscountModels/Dans-07YahooAnswers-7b
# no parameters necessary for base model
- model: Azazelle/Maylin-7b #200
parameters:
weight: 0.45
density: 0.75
- model: Azazelle/smol_bruin-7b #175
parameters:
weight: 0.39
density: 0.70
- model: SanjiWatsuki/Kunoichi-7B #100
parameters:
weight: 0.22
density: 0.52
merge_method: dare_ties
base_model: Dans-DiscountModels/Dans-07YahooAnswers-7b
parameters:
int8_mask: true
dtype: bfloat16
``` |