modelId (string, 5-122 chars) | author (string, 2-42 chars) | last_modified | downloads (int64, 0-738M) | likes (int64, 0-11k) | library_name (245 classes) | tags (sequence, 1-4.05k values) | pipeline_tag (48 classes) | createdAt | card (string, 1-901k chars)
---|---|---|---|---|---|---|---|---|---
inokufu/flaubert-base-uncased-xnli-sts | inokufu | "2023-04-24T11:06:05Z" | 1,076 | 7 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"flaubert",
"feature-extraction",
"sentence-similarity",
"transformers",
"fr",
"xnli",
"stsb_multi_mt",
"dataset:xnli",
"dataset:stsb_multi_mt",
"arxiv:1809.05053",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-03-02T23:29:05Z" | ---
pipeline_tag: sentence-similarity
language: fr
tags:
- sentence-similarity
- transformers
- fr
- flaubert
- sentence-transformers
- feature-extraction
- xnli
- stsb_multi_mt
datasets:
- xnli
- stsb_multi_mt
---
# inokufu/flaubert-base-uncased-xnli-sts
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Details
This model is based on the French flaubert-base-uncased pre-trained model [1, 2].
It was then fine-tuned on a natural language inference task (XNLI) [3]. This task consists of training the model to recognize relations between sentences (contradiction, neutral, entailment).
It was then further fine-tuned on a text semantic similarity task (on STS-fr data) [4]. This task consists of training the model to estimate the similarity between two sentences.
This fine-tuning process gives our model a much better semantic representation than the one provided by the base model.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
model = SentenceTransformer('inokufu/flaubert-base-uncased-xnli-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Apprendre le python", "Devenir expert en comptabilité"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts')
model = AutoModel.from_pretrained('inokufu/flaubert-base-uncased-xnli-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
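Since this is a sentence-similarity model, the resulting embeddings are typically compared with cosine similarity. The following minimal sketch reuses the `sentence_embeddings` computed in the example above (the snippet itself is illustrative and not part of the original card):
```python
import torch.nn.functional as F

# L2-normalize so that the dot product equals the cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarity = normalized[0] @ normalized[1]
print(f"Cosine similarity between the two sentences: {similarity.item():.4f}")
```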
## Evaluation Results
STS (fr) score: 83.07%
## Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: FlaubertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## References
[1] https://hal.archives-ouvertes.fr/hal-02784776v3/document <br>
[2] https://huggingface.co/flaubert/flaubert_base_uncased <br>
[3] https://arxiv.org/abs/1809.05053 <br>
[4] https://huggingface.co/datasets/stsb_multi_mt <br>
|
dvruette/llama-13b-pretrained-sft-epoch-2 | dvruette | "2023-04-04T21:17:37Z" | 1,076 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-04T19:47:40Z" | https://wandb.ai/open-assistant/supervised-finetuning/runs/gljcl75b |
reeducator/vicuna-13b-cocktail | reeducator | "2023-05-26T08:16:31Z" | 1,076 | 42 | transformers | [
"transformers",
"llama",
"text-generation",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:gozfarb/ShareGPT_Vicuna_unfiltered",
"dataset:gozfarb/bluemoon_roleplay_300k_vicuna",
"dataset:gozfarb/GPTeacher-Vicuna",
"dataset:gozfarb/SuperCOT-vicuna-dataset",
"dataset:gozfarb/Vicuna_Evol_Instruct_Cleaned",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-05T11:09:37Z" | ---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- gozfarb/ShareGPT_Vicuna_unfiltered
- gozfarb/bluemoon_roleplay_300k_vicuna
- gozfarb/GPTeacher-Vicuna
- gozfarb/SuperCOT-vicuna-dataset
- gozfarb/Vicuna_Evol_Instruct_Cleaned
language:
- en
---
## General
Vicuna 1.1 13B finetune incorporating various datasets in addition to the unfiltered ShareGPT. This is an experiment attempting to enhance the creativity of Vicuna 1.1, while also reducing censorship as much as possible. All datasets have been cleaned. Additionally, only the "instruct" portion of GPTeacher has been used.
## Models
*GGML quant for llama.cpp:*<br/>
ggml-vicuna-13b-cocktail-v1-q5_0.bin<br/>
*GPTQ 4-bit CUDA:*<br/>
vicuna-13b-cocktail-v1-4bit-128g.safetensors<br/>
## Remarks
This model has been trained with a modified prompt:<br/>
```
A chat between a user and an associate. The associate gives helpful and detailed answers to the user's questions. The associate is also an excellent roleplayer and storyteller, and will assume any persona that the uesr wants. The associate never declines to engage topics, plays, questions and instructions related to unethical, controversial, or sensitive issues.
USER: [user message]
ASSOCIATE: [model reply]</s>
```
The training roles are USER and ASSOCIATE.
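If you want to build this prompt programmatically, a small helper along the following lines can be used; this is an illustrative sketch (the function name is not part of the model's tooling, and the system line is shortened from the template above):
```python
SYSTEM = (
    "A chat between a user and an associate. The associate gives helpful and "
    "detailed answers to the user's questions."
)

def build_prompt(turns):
    """turns: list of (user_message, reply) pairs; reply is None for the turn to be generated."""
    lines = [SYSTEM]
    for user_message, reply in turns:
        lines.append(f"USER: {user_message}")
        lines.append(f"ASSOCIATE: {reply}</s>" if reply is not None else "ASSOCIATE:")
    return "\n".join(lines)

print(build_prompt([("Hello, who are you?", None)]))
```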
|
timdettmers/guanaco-13b-merged | timdettmers | "2023-07-11T00:24:32Z" | 1,076 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-07-11T00:13:28Z" | Entry not found |
lgaalves/xgen-7b-8k_dolly | lgaalves | "2023-09-22T15:24:09Z" | 1,076 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:databricks/databricks-dolly-15k",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-12T14:58:44Z" | ---
datasets:
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
license: mit
---
# xgen-7b-8k-dolly
**xgen-7b-8k-dolly** is an instruction fine-tuned model based on the XGen-7B model pre-trained under 8K sequence length.
### Benchmark Metrics
| Metric | xgen-7b-8k-dolly | xgen-7b-8k-base |
|-----------------------|-------|-------|
| Avg. | - | - |
| ARC (25-shot) | - | - |
| HellaSwag (10-shot) | - | - |
| MMLU (5-shot) | - | - |
| TruthfulQA (0-shot) | - | - |
We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
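The card does not spell out those instructions. As a rough, version-dependent sketch, the harness can be driven from Python roughly as follows; the model type string, task name and few-shot setting mirror the leaderboard configuration and should be treated as assumptions, the exact API differs between harness releases, and XGen's custom tokenizer may additionally require `trust_remote_code` support from the harness wrapper:
```python
from lm_eval import evaluator

# Evaluate one leaderboard task (here ARC with 25-shot prompting)
results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=lgaalves/xgen-7b-8k-dolly",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])
```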
### Model Details
* **Trained by**: Luiz G A Alves
* **Model type:** **xgen-7b-8k-dolly** is an auto-regressive language model based on the XGen-7B model pre-trained under 8K sequence length.
* **Language(s)**: English
### How to use:
```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/xgen-7b-8k-dolly")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```
or you can load the model directly using:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lgaalves/xgen-7b-8k-dolly")
model = AutoModelForCausalLM.from_pretrained("lgaalves/xgen-7b-8k-dolly")
```
### Training Dataset
`lgaalves/xgen-7b-8k-dolly` was trained using the Databricks Dolly dataset [`databricks/databricks-dolly-15k`](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
### Training Procedure
`lgaalves/xgen-7b-8k-dolly` was instruction fine-tuned using LoRA.
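The card does not list the LoRA hyperparameters that were used. The sketch below only illustrates what a typical PEFT/LoRA setup for a causal LM looks like; the rank, alpha, dropout and target module names are placeholders, not the values from the actual run:
```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model (XGen uses custom code, hence trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    "Salesforce/xgen-7b-8k-base", torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Placeholder LoRA settings -- the real run's hyperparameters are not documented
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```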
# Intended uses, limitations & biases
You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral. |
chargoddard/rpguild-chatml-13b | chargoddard | "2023-10-18T00:41:37Z" | 1,076 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"roleplay",
"en",
"dataset:chargoddard/rpguild",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-17T23:57:26Z" | ---
datasets:
- chargoddard/rpguild
language:
- en
tags:
- llama
- roleplay
license: cc-by-nc-4.0
---
It's a roleplay model! Trained on ~50k examples of human written roleplay sessions. Not a hint of gpt-flavor writing. Very creative and very able to roll with the punches.
Does tend to speak for you more than it should and has a very, very strong bias towards speech in quotes.
Uses ChatML prompt format, with optional length suffixes:
```
<|im_start|>system
Enter roleplay mode. You are Steve.
Steve is a nasty little man and solves all his problems by punching people in the face.<|im_end|>
<|im_start|>Gundalf, Gandalf's cousin with a gun
Gundalf runs frantically up to Steve and begins yelling. "We have a big problem, Steve!"<|im_end|>
<|im_start|>Steve (Length: short)
"What is it now Gundalf? Do you want me to punch you in the face?" Steve asked, looking down at him. He was pretty sure that this guy wasn't worth the trouble of punching him in the face, but he didn't like to leave loose ends hanging either.
"There's been an incident," Gundalf said quickly, "A dwarf has stolen some gold from Smaug."
"Well I don't care about any of those things," Steve responded, turning away. <|im_end|>
```
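A small formatting helper can make the ChatML-with-length-suffix convention easier to apply; this is an illustrative sketch rather than tooling shipped with the model:
```python
from typing import Optional

def chatml_turn(role: str, content: str, length: Optional[str] = None) -> str:
    """Format one ChatML turn, optionally appending a (Length: ...) suffix to the role line."""
    header = role if length is None else f"{role} (Length: {length})"
    return f"<|im_start|>{header}\n{content}<|im_end|>"

prompt = "\n".join([
    chatml_turn("system", "Enter roleplay mode. You are Steve."),
    chatml_turn("Gundalf, Gandalf's cousin with a gun",
                'Gundalf runs frantically up to Steve and begins yelling. "We have a big problem, Steve!"'),
    "<|im_start|>Steve (Length: short)\n",  # left open for the model to complete
])
print(prompt)
```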
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
typeof/open_llama_3b_v2 | typeof | "2023-10-20T19:24:08Z" | 1,076 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-20T18:47:42Z" | ---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
---
# Tokenizer Fixed!!
Thanks to https://huggingface.co/mistralai/Mistral-7B-v0.1/discussions/26/files
# OpenLLaMA: An Open Reproduction of LLaMA
**TL;DR**: we are releasing our public preview of OpenLLaMA, a permissively licensed open source reproduction of Meta AI's LLaMA. We are releasing a series of 3B, 7B and 13B models trained on different data mixtures. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations.
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. The v2 model is better than the old v1 model trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we've observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'typeof/open_llama_3b_v2'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
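Equivalently, the slow tokenizer mentioned above can be selected through `AutoTokenizer`; a one-line alternative to the `LlamaTokenizer` call in the example:
```python
from transformers import AutoTokenizer

# use_fast=False avoids the auto-converted fast tokenizer discussed above
tokenizer = AutoTokenizer.from_pretrained("typeof/open_llama_3b_v2", use_fast=False)
```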
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch so it is no longer needed to obtain the original LLaMA tokenizer and weights.
## Dataset and Training
The v1 models are trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). The v2 models are trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata) and the wikipedia, arxiv, book and stackexchange part of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs open datasets rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism](https://engineering.fb.com/2021/07/15/open-source/fsdp/) (also known as ZeRO stage 3) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 3Bv2 | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | -------------- | -------------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.34 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.35 | 0.32 | 0.36 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.39 | 0.35 | 0.38 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.34 | 0.39 | 0.34 | 0.37 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.36 | 0.41 | 0.37 | 0.38 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.68 | 0.73 | 0.69 | 0.72 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.63 | 0.70 | 0.65 | 0.68 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.66 | 0.72 | 0.68 | 0.71 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.52 | 0.56 | 0.49 | 0.53 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.70 | 0.75 | 0.67 | 0.72 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.26 | 0.30 | 0.27 | 0.30 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.38 | 0.41 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.77 | 0.79 | 0.75 | 0.76 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.78 | 0.80 | 0.76 | 0.77 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.87 | 0.89 | 0.88 | 0.89 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.88 | 0.89 | 0.89 | 0.90 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.55 | 0.57 | 0.58 | 0.60 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.22 | 0.23 | 0.22 | 0.23 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.50 | 0.50 | 0.48 | 0.51 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.63 | 0.66 | 0.62 | 0.67 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.53 | 0.56 | 0.53 | 0.55 | 0.57 |
We removed the task CB and WSC from our benchmark, as our model performs suspiciously high on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to specially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
Sao10K/Stheno-1.10-L2-13B | Sao10K | "2023-10-22T09:00:05Z" | 1,076 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-21T13:38:45Z" | ---
license: llama2
language:
- en
---
My GGUF Quants: https://huggingface.co/Sao10K/Stheno-1.10-L2-13B-GGUF
***
Oh, you thought there'd be a 2.0? Nope. Not yet.
A recreation of Stheno with Updated versions of the same models and Merging Values. Feels more coherent, and is uncensored (zero context) at least according to my tests.
Is somewhat smarter, I think? At least it passes 4/5 times in my own test suites.
Feel free to try it out, I'd appreciate Feedback.
Most formats could work, but my tests have all been done in Alpaca format and it works well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
support me [here](https://ko-fi.com/sao10k) :)
Once Again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [mergekit](https://github.com/cg123/mergekit) script. Thanks to the original model creators too! |
Mikivis/gpt2-large-imdb3 | Mikivis | "2023-10-21T14:55:48Z" | 1,076 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-21T14:35:40Z" | ---
license: apache-2.0
---
# gpt2-large-imdb3
This model is a fine-tuned version of output_models/gpt2-large-news2 on the customized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- total_train_batch_size: 6
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3 |
Sao10K/Stheno-1.11v2-13B-L2 | Sao10K | "2023-10-31T07:57:25Z" | 1,076 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-24T08:30:36Z" | ---
license: llama2
language:
- en
---
experimental
<br>
<br>
<br>
<br>filler information
<br>
<br>
<br> empty head no thoughts
<br>
stheno
<br>
<br>
<br> i wanna go home
<br>
stheno
<br>
<br> please
<br>
stheno
download if u want, im testing shit out so it might not even work tbh |
Corianas/tiny-llama-miniguanaco-1.5T | Corianas | "2023-11-18T09:34:02Z" | 1,076 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-04T08:12:14Z" | ---
language:
- en
pipeline_tag: text-generation
model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
dataset: ArmelR/oasst1_guanaco_english
license: apache-2.0
---
TinyLLama 1.5T checkpoint trained to answer questions.
```
f"{prompt}\n{completion}\n<END>"
```
No special formatting, just question, then newline to begin the answer.
```
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
# Load model and tokenizer directly
tokenizer = AutoTokenizer.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")
model = AutoModelForCausalLM.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")
# Run text generation pipeline with our next model
prompt = "What is a large language model?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)
result = pipe(f"<s>{prompt}")
print(result[0]['generated_text'])
```
The result will contain the answer, ending with \<END\> on a new line.
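Since generation terminates with the `<END>` marker, you will usually want to trim the output there. A small illustrative post-processing step, assuming `result` from the example above:
```python
# Keep only the text before the <END> marker emitted by the model
generated = result[0]['generated_text']
answer = generated.split("<END>")[0].strip()
print(answer)
```
|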
pszemraj/perSLIMmon-8b-base | pszemraj | "2023-11-13T13:50:59Z" | 1,076 | 0 | transformers | [
"transformers",
"safetensors",
"persimmon",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-13T06:49:11Z" | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- persimmon
---
# perSLIMmon-8b-base
> persimmon-8b went to the vocab lipo clinic
A slimmed-down version of [persimmon-8b-base](https://huggingface.co/adept/persimmon-8b-base) which removes the ~70,000 unused entries in the model vocabulary and tokenizer (see the safetensors layer overview). Should be _slightly_ faster.
Credit: [fine-tune-fuyu](https://github.com/phillip-kravtsov/fine-tune-fuyu) (`scripts/surgery.py` was adapted for persimmon)
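The vocabulary surgery itself lives in the linked repository; as a rough illustration of the core idea only, pruning an embedding matrix down to a kept subset of token ids looks roughly like this (function name and scope are assumptions):
```python
import torch

def prune_embedding(weight: torch.Tensor, keep_ids: list) -> torch.Tensor:
    """Return a new embedding weight containing only the rows in keep_ids.

    The real surgery script also remaps token ids and rewrites the tokenizer
    and output head consistently, which is omitted here.
    """
    index = torch.tensor(keep_ids, dtype=torch.long)
    return weight.index_select(0, index).clone()
```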
## inference
install required pkgs:
```sh
pip install -U transformers accelerate bitsandbytes sentencepiece
```
load in 4bit & run inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("pszemraj/perSLIMmon-8b-base")
model = AutoModelForCausalLM.from_pretrained(
"pszemraj/perSLIMmon-8b-base",
load_in_4bit=True, # GPU required
torch_dtype="auto",
device_map="auto",
)
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(
model.device
)
tokens = model.generate(
**inputs,
max_new_tokens=64,
temperature=0.75,
top_p=0.95,
epsilon_cutoff=1e-5,
repetition_penalty=1.05,
renormalize_logits=True,
do_sample=True,
) # adapt inference params as needed
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
Inference is decently fast on a Colab T4:
```
CPU times: user 6.01 s, sys: 138 ms, total: 6.15 s
Wall time: 6.23 s
```
|
Danielbrdz/Barcenas-6b-200k | Danielbrdz | "2023-11-13T20:24:38Z" | 1,076 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-13T19:27:47Z" | ---
license: other
license_name: yi-license
license_link: LICENSE
---
Barcenas 6b 200k
It is a fine-tuned version of larryvrh/Yi-6B-200K-Llamafied.
The HuggingFaceH4/no_robots dataset was used for training.
An Nvidia A100 was used for all the training of this model.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 |
S4sch/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b | S4sch | "2023-11-30T12:28:32Z" | 1,076 | 6 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-27T21:30:04Z" | ---
license: apache-2.0
---
Frankenmerge 11b between teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-1
GGUF: https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF
Merge with the following conditions:
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [0, 8]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [4, 12]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [9, 16]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [13, 20]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [17, 24]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [21, 28]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [25, 32]
merge_method: passthrough
Benchmarks are coming soon...
|
xDAN-AI/xDAN-L1-SOLAR-RL-v1 | xDAN-AI | "2023-12-30T18:29:10Z" | 1,076 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"xDAN-AI",
"conversational",
"dataset:Open-Orca/OpenOrca",
"dataset:OpenAssistant/oasst2",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-30T09:17:35Z" | ---
license: cc-by-4.0
datasets:
- Open-Orca/OpenOrca
- OpenAssistant/oasst2
tags:
- xDAN-AI
---
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/643197ac288c9775673a01e9/tVAcwKkIH5vkfzqgqHeHi.png" style="width: 45%;">
</div>
This is an RL fine-tuned model based on SOLAR 10.7B.
## Disclaimer
We employ rigorous data compliance validation algorithms throughout the training of our language model to ensure the highest level of compliance. However, due to the intricate nature of data and the wide range of potential usage scenarios for the model, we cannot guarantee that it will consistently produce accurate and sensible outputs. Users should be aware of the possibility of the model generating problematic results. Our organization disclaims any responsibility for risks or issues arising from misuse, improper guidance, unlawful usage, misinformation, or subsequent concerns regarding data security.
## About xDAN-AI
xDAN-AI represents the forefront of Silicon-Based Life Factory technology. For comprehensive information and deeper insights into our cutting-edge technology and offerings, please visit our website: https://www.xdan.ai.
|
peterkang/mymodel_v5 | peterkang | "2024-01-04T07:00:27Z" | 1,076 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-04T06:36:03Z" | Entry not found |
Lexic0n/Mistral_7B-Open_Hermes-NSFWV1 | Lexic0n | "2024-04-07T22:03:35Z" | 1,076 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"doi:10.57967/hf/2024",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-06T23:01:47Z" | ---
license: apache-2.0
language:
- en
tags:
- not-for-all-audiences
---
# Welcome!!
This is one of my first trained models. Sadly, I lack the hardware and funds to do any larger-scale training; this sort of stuff I find interesting as a hobby.
If you want to help fund a training run on large-scale data for a custom model, hit me up.
# Model Card for Mistral_7B-Open_Hermes-NSFWV1
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
This is a fine-tuned version of teknium/OpenHermes-2p5-Mistral-7B.
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model :** teknium/OpenHermes-2p5-Mistral-7B
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** * [teknium/OpenHermes-2p5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
|
misri/leosamsHelloworldXL_helloworldXL60 | misri | "2024-04-27T14:18:59Z" | 1,076 | 1 | diffusers | [
"diffusers",
"safetensors",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-04-27T08:34:58Z" | ---
license: unknown
---
|
minhdang/onnx_tran | minhdang | "2024-06-06T14:41:57Z" | 1,076 | 0 | transformers | [
"transformers",
"onnx",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2024-06-06T14:40:37Z" | ---
license: apache-2.0
---
|
pythainlp/thaig2p-v2.0 | pythainlp | "2024-05-19T04:33:41Z" | 1,075 | 1 | transformers | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"th",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-01-16T06:21:47Z" | ---
license: apache-2.0
language:
- th
---
# Thai G2P v2.0
This is a grapheme-to-phoneme model for the Thai language. It was trained on [wiktionary-23-7-2022-clean.tsv](https://github.com/PyThaiNLP/thai-g2p-wiktionary-corpus/blob/main/wiktionary-23-7-2022-clean.tsv).
```python
from transformers import pipeline
generator = pipeline("text2text-generation", model = "pythainlp/thaig2p-v2.0")
generator("สามารถ")
# {'generated_text': 's aː ˩˩˦ . m aː t̚ ˥˩'}
``` |
mosesjun0h/llama-30b-hf-baize-lora-b16 | mosesjun0h | "2023-04-16T05:54:44Z" | 1,075 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-16T05:46:18Z" | Entry not found |
GeorgiaTechResearchInstitute/galactica-30b-evol-instruct-70k | GeorgiaTechResearchInstitute | "2023-06-27T22:22:01Z" | 1,075 | 23 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"galactica",
"wizardlm",
"alpaca",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"arxiv:2304.12244",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-16T22:34:21Z" | ---
license: cc-by-nc-4.0
datasets:
- WizardLM/WizardLM_evol_instruct_70k
library_name: transformers
pipeline_tag: text-generation
tags:
- galactica
- wizardlm
- alpaca
- opt
inference: false
---
# GALACTICA 30B Evol-Instruct
GALACTICA 30B fine-tuned on the Evol-Instruct 70k dataset.
The model card from the original Galactica repo can be found [here](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md), and the original paper [here](https://galactica.org/paper.pdf).
The HF dataset for Evol-Instruct-70k can be found [here](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_70k), and the original GitHub repo for WizardLM is [here](https://github.com/nlpxucan/WizardLM).
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus and are designed to perform scientific tasks.
Evol-instruct-70k is a dataset of evolved instruction-response pairs generated via the Evol-Instruct framework using gpt-3.5-turbo.
## Model Use
The GALACTICA model card specifies that the primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain, and it cautions against production use of GALACTICA without safeguards due to the potential for the model to produce inaccurate information.
The original GALACTICA models are available under a non-commercial CC BY-NC 4.0 license, and models based on the Evol-Instruct-70k dataset are additionally subject to the [OpenAI Terms of Service](https://openai.com/policies/terms-of-use).
<details>
<summary> Example Outputs </summary>
*__Full Prompt:__*
```
### Instruction:
Write out and explain Maxwell's equations using markdown and latex syntax. Highlight the key implication of each equation.
### Response:
```
*__Response:__*
Maxwell's equations are a set of four equations that describe the behavior of electric and magnetic fields. Here are the equations in both markdown and latex syntax:
- **Gauss's Law for Electric Fields:**
$$\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon_0}$$
This equation states that the divergence of the electric field E is proportional to the charge density ρ divided by the vacuum permittivity ε0. This means that if there is a non-zero charge density, there must be a non-zero electric field surrounding it.
- **Gauss's Law for Magnetic Fields:**
$$\nabla \cdot \mathbf{B} = 0$$
This equation states that the divergence of the magnetic field B is always zero. This means that magnetic monopoles do not exist, and magnetic fields are always created by the motion of electric charges.
- **Maxwell-Faraday Equation:**
$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$$
This equation states that the curl of the electric field is equal to the negative of the time derivative of the magnetic field. This means that changing magnetic fields can create electric fields, and vice versa.
- **Ampere's Law with Maxwell's Addition:**
$$\nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)$$
This equation states that the curl of the magnetic field is equal to the product of the vacuum permeability μ0 and the sum of the electric current density J and the time derivative of the electric field. This means that changing electric fields can create magnetic fields, and that electric currents can also create magnetic fields.
**Implications**
- Gauss's Laws imply that electric charges must be conserved, since the net charge in a closed system must always be zero.
- The Maxwell-Faraday Equation implies that electromagnetic waves can travel through space, since changing magnetic fields can create electric fields, which can create changing magnetic fields, and so on. This is the basis for radio waves, microwaves, and other forms of electromagnetic radiation.
- Ampere's Law with Maxwell's Addition implies that electric currents can create magnetic fields, which can create electric currents, and so on. This is the basis for electrical generators and transformers.
Overall, Maxwell's equations are a fundamental framework for understanding the behavior of electric and magnetic fields, and have important applications in fields such as electromagnetism, radio technology, and electrical engineering.
</details>
## Training Data
This model was trained by fine-tuning pre-trained GALACTICA 30B on the Evol-Instruct-70k dataset. GALACTICA models were trained on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, and more.
Fine-tuning the base GALACTICA models on the 70k instruction-response pairs in the Evol-Instruct-70k dataset allows users to query the resulting model in an instruct-response fashion.
## How to Use
The GALACTICA Evol-Instruct-70K weights are made available for use with the `transformers` library.
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("GeorgiaTechResearchInstitute/galactica-30b-evol-instruct-70k")
model = AutoModelForCausalLM.from_pretrained("GeorgiaTechResearchInstitute/galactica-30b-evol-instruct-70k", device_map="auto", torch_dtype=torch.bfloat16)
# the evol-instruct models were fine-tuned with the same hidden prompts as the Alpaca project
no_input_prompt_template = ("### Instruction:\n{instruction}\n\n### Response:")
prompt = "Write out Maxwell's equations and explain the meaning of each one."
formatted_prompt = no_input_prompt_template.format_map({'instruction': prompt})
tokenized_prompt = tokenizer(formatted_prompt, return_tensors="pt").input_ids.to(model.device)
out_tokens = model.generate(tokenized_prompt)
print(tokenizer.batch_decode(out_tokens, skip_special_tokens=False, clean_up_tokenization_spaces=False))
```
</details>
## Training Resources
GALACTICA 30B Evol-Instruct was fine-tuned in 196 hours using 16 A100 80GB GPUs, 16-bit mixed-precision, an effective batch-size of 128, and with a maximum context window of 1024 tokens. This model was trained using DeepSpeed ZeRO-3 optimizations.
## Performance and Limitations
Common benchmark scores generated using the [Eleuther AI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/master).
| Task | Version | Metric | Value | Stderr |
|------|---------|--------|-------|--------|
| MMLU 5-shot | 1 | acc | 0.4420 | |
| arc_challenge 25-shot | 0 | acc | 0.4684 | 0.146 |
| | | acc_norm | 0.4787 | 0.146 |
| hellaswag 10-shot | 0 | acc | 0.4705 | 0.0050 |
| | | acc_norm | 0.6111 | 0.0049 |
Qualitative evaluation suggests that the evol-instruct-70k fine-tuned Galactica models are significantly more controllable and attentive to user prompts than the Alpaca fine-tuned GALPACA models.
## Works Cited
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
```
```bibtex
@misc{xu2023wizardlm,
title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
year={2023},
eprint={2304.12244},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mzbac/CodeLlama-34b-guanaco-gptq | mzbac | "2023-09-08T13:17:47Z" | 1,075 | 5 | transformers | [
"transformers",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-06T03:45:31Z" | ---
license: mit
---
CodeLlama 34b base model fine-tuned on text chunks from the OpenAssistant-Guanaco dataset instead of Q&A pairs, so it struggles to determine the end of the answer. It is recommended to use a stop string like "### Human:" to prevent the model from talking to itself.
Prompt template:
```
### Human: {prompt}
### Assistant:
```
Run the model via text-generation-inference
One GPU:
```
sudo docker run --gpus all --shm-size 1g -p 5000:80 -v $PWD/models:/data ghcr.io/huggingface/text-generation-inference:latest --max-total-tokens 4096 --quantize gptq --model-id mzbac/CodeLlama-34b-guanaco-gptq
```
Two GPUs:
```
docker run --gpus all --shm-size 1g -p 5000:80 -v $PWD/models:/data ghcr.io/huggingface/text-generation-inference:latest --max-total-tokens 4096 --max-input-length 4000 --max-batch-prefill-tokens 4096 --quantize gptq --num-shard 2 --model-id mzbac/CodeLlama-34b-guanaco-gptq
```
Query the model via curl
```
curl 127.0.0.1:8001/generate \
-X POST \
-d '{"inputs":"### Human: 给我准备一个去日本旅行的计划\n### Assistant:", "parameters":{"max_new_tokens":2048, "stop": [
"### Human:"
]}}' \
-H 'Content-Type: application/json'
```
Or from TypeScript:
```
npm install axios readline
```
```
import axios from 'axios';
import readline from 'readline';
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
const makeRequest = async (conversation: Array<{ role: string, content: string }>) => {
const url = "http://127.0.0.1:5000/generate";
const conversationString = conversation.map(
(entry) => `### ${entry.role}: ${entry.content}`
).join('\n');
const conversationWithPrompt = `${conversationString}\n### Assistant:`;
const payload = {
inputs: conversationWithPrompt,
parameters: {
temperature: 0.7,
max_new_tokens: 2000,
repetition_penalty: 1.03,
top_k: 45,
top_p: 0.95,
typical_p: 0.95,
stop: ['### Human']
}
};
const headers = { 'Content-Type': 'application/json' };
try {
const response = await axios.post(url, payload, { headers });
return response.data;
} catch (error) {
return `Failed to make request, status code: ${error.response?.status}`;
}
};
const main = async () => {
console.log("Welcome to the CLI tool. Type 'exit' to quit.");
const conversation: Array<{ role: string, content: string }> = [];
const question = (prompt: string) => {
return new Promise<string>(resolve => {
rl.question(prompt, (answer) => {
resolve(answer);
});
});
};
while (true) {
const humanInput = await question("### Human: ");
if (humanInput.toLowerCase() === 'exit') {
console.log("Exiting...");
rl.close();
break;
}
conversation.push({ role: 'Human', content: humanInput });
console.log("### Assistant:");
const assistantOutput = await makeRequest(conversation);
const assistantText = assistantOutput.generated_text || 'Failed to get a valid response.';
console.log(assistantText);
conversation.push({ role: 'Assistant', content: assistantText });
}
};
main();
```
|
Yukang/Llama-2-13b-longlora-64k | Yukang | "2023-10-25T15:47:01Z" | 1,075 | 9 | transformers | [
"transformers",
"llama",
"text-generation",
"arxiv:2309.12307",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-13T06:02:36Z" | # LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models
<font size=6><div align='center' > <a href=http://arxiv.org/abs/2309.12307>**Paper**</a> | <a href="https://huggingface.co/Yukang">**Models**</a> | <a href="https://github.com/dvlab-research/LongLoRA">**Code**</a> </div></font>
**LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models [[Paper](http://arxiv.org/abs/2309.12307)]** <br />
[Yukang Chen](https://scholar.google.com/citations?user=6p0ygKUAAAAJ&hl=en),
[Shengju Qian](https://scholar.google.com/citations?user=QNnWmasAAAAJ),
[Haotian Tang](https://scholar.google.com/citations?user=WxL13BAAAAAJ&hl),
[Xin Lai](https://scholar.google.com/citations?user=tqNDPA4AAAAJ&hl=zh-CN),
[Zhijian Liu](https://scholar.google.com/citations?user=3coYSTUAAAAJ&hl=en),
[Song Han](https://scholar.google.com/citations?user=E0iCaa4AAAAJ&hl=zh-CN),
[Jiaya Jia](https://scholar.google.com/citations?user=XPAkzTEAAAAJ&hl=en)<br />
## Abstract
We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs), with limited computation cost.
Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources.
In this paper, we speed up the context extension of LLMs in two aspects. On the one hand, although dense global attention is needed during inference, fine-tuning the model can be effectively and efficiently done by sparse local attention. The proposed shift short attention effectively enables context extension, leading to non-trivial computation saving with similar performance to fine-tuning with vanilla attention. On the other hand, we find that LoRA for context extension works well under the premise of trainable embedding and normalization. LongLoRA demonstrates strong empirical results on various tasks on LLaMA2 models from 7B/13B to 70B. LongLoRA adopts LLaMA2 7B from 4k context to 100k, or LLaMA2 70B to 32k on a single 8x A100 machine. LongLoRA extends models' context while retaining their original architectures, and is compatible with most existing techniques, like FlashAttention-2. In addition, to make LongLoRA practical, we collect a dataset, LongQA, for supervised fine-tuning. It contains more than 3k long context question-answer pairs. For more details, please refer to the [paper](http://arxiv.org/abs/2309.12307).
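As a rough illustration of the shifted short attention idea only (this is not the authors' implementation; see the linked repository for the real code), the half-head shift and grouping can be sketched as follows, where the tensor layout and helper name are assumptions:
```python
import torch

def s2_attn_groups(qkv: torch.Tensor, group_size: int) -> torch.Tensor:
    """Sketch of shifted short attention grouping.

    qkv: (batch, seq_len, 3, num_heads, head_dim), with seq_len divisible by
    group_size. Half of the heads are shifted by half a group so that
    information flows between neighbouring groups; attention is then computed
    within each group as usual.
    """
    bsz, seq_len, three, num_heads, head_dim = qkv.shape
    # Shift the second half of the heads along the sequence dimension
    qkv[:, :, :, num_heads // 2:] = qkv[:, :, :, num_heads // 2:].roll(
        -group_size // 2, dims=1
    )
    # Fold the sequence into groups so standard attention runs per group
    return qkv.reshape(bsz * (seq_len // group_size), group_size, three, num_heads, head_dim)
```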
## Highlights
**LongLoRA** speeds up the context extension of pre-trained large language models at both the attention level and the weight level.
1. The proposed shifted short attention is easy to implement, compatible with Flash-Attention, and not required during inference.
2. We release all our models, including models from 7B to 70B, context length from 8k to 100k, including [LLaMA2-LongLoRA-7B-100k](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft), [LLaMA2-LongLoRA-13B-64k](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k), and [LLaMA2-LongLoRA-70B-32k](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k).
3. We build up a long-context QA dataset, LongQA, for supervised fine-tuning (SFT). We release 13B and 70B 32k models with SFT, [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) and [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft). We will further release the dataset next week.
## Released models
### Models with supervised fine-tuning
| Model | Size | Context | Train | Link |
|:----------------------------------|------|---------|---------|-------------------------------------------------------------------------|
| Llama-2-13b-chat-longlora-32k-sft | 13B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) |
| Llama-2-70b-chat-longlora-32k-sft | 70B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft) |
### Models with context extension via fully fine-tuning
| Model | Size | Context | Train | Link |
|:----------------------------|------|---------|-------|-------------------------------------------------------------------|
| Llama-2-7b-longlora-8k-ft | 7B | 8192 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k-ft) |
| Llama-2-7b-longlora-16k-ft | 7B | 16384 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k-ft) |
| Llama-2-7b-longlora-32k-ft | 7B | 32768 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k-ft) |
| Llama-2-7b-longlora-100k-ft | 7B | 100000 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft) |
| Llama-2-13b-longlora-8k-ft | 13B | 8192 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k-ft) |
| Llama-2-13b-longlora-16k-ft | 13B | 16384 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k-ft) |
| Llama-2-13b-longlora-32k-ft | 13B | 32768 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k-ft) |
### Models with context extension via improved LoRA fine-tuning
| Model | Size | Context | Train | Link |
|:----------------------------|------|---------|-------|-------------------------------------------------------------------|
| Llama-2-7b-longlora-8k | 7B | 8192 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k) |
| Llama-2-7b-longlora-16k | 7B | 16384 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k) |
| Llama-2-7b-longlora-32k | 7B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k) |
| Llama-2-13b-longlora-8k | 13B | 8192 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k) |
| Llama-2-13b-longlora-16k | 13B | 16384 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k) |
| Llama-2-13b-longlora-32k | 13B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k) |
| Llama-2-13b-longlora-64k | 13B | 65536 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k) |
| Llama-2-70b-longlora-32k | 70B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k) |
| Llama-2-70b-chat-longlora-32k | 70B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k) |
## Citation
If you find this project useful in your research, please consider citing:
```
@article{longlora,
title={LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models},
author={Yukang Chen and Shengju Qian and Haotian Tang and Xin Lai and Zhijian Liu and Song Han and Jiaya Jia},
journal={arXiv:2309.12307},
year={2023}
}
```
## Acknowledgement
- This work is built upon [LLaMA2](https://ai.meta.com/llama) as the pre-trained models.
- This work is based on [DeepSpeed](https://github.com/microsoft/DeepSpeed), [peft](https://github.com/huggingface/peft), and [Flash-Attention2](https://github.com/Dao-AILab/flash-attention) for acceleration.
- The perplexity evaluation code is modified upon [Landmark Attention](https://github.com/epfml/landmark-attention).
- We use [LongChat](https://github.com/DachengLi1/LongChat) for the retrieval evaluation.
|
Yukang/Llama-2-70b-chat-longlora-32k-sft | Yukang | "2023-10-13T03:56:08Z" | 1,075 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-21T12:56:33Z" | **We release the long instruction-following dataset**, [LongAlpaca-12k](https://drive.google.com/file/d/1JVC1p_Ht-1h61tKitOCW0blnCHf-552U/view?usp=share_link) and **the corresponding models**, [LongAlpaca-7B](https://huggingface.co/Yukang/LongAlpaca-7B), [LongAlpaca-13B](https://huggingface.co/Yukang/LongAlpaca-13B), and [LongAlpaca-70B](https://huggingface.co/Yukang/LongAlpaca-70B).
- (*These sft models*, [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) and [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft), *have been deprecated*.) |
FPHam/ProfMcSmartyBS_13b_GPTQ | FPHam | "2023-10-31T19:07:57Z" | 1,075 | 6 | transformers | [
"transformers",
"llama",
"text-generation",
"llm",
"llama2",
"ai",
"model",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T23:37:42Z" | ---
tags:
- llm
- llama
- llama2
- ai
- model
license: cc-by-nc-4.0
---
<!-- header start -->
<div style="display: flex; flex-direction: column; align-items: center;">
</div>
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/ProfMcSmartyBS_13b_GPTQ/resolve/main/smarty_bc.jpg" alt="McSmarty BS" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy my great glasses Ko-fi</a></p>
</div>
<!-- header end -->
# Prof. McSmarty B.S. (v 0.1)
(Popular Science Educator Persona with exquisite human-like language skills)
Do you know somebody like that around here who constantly tells everybody how he knows everything there is to know, and demonstrates it, over and over, ad nauseam, whether anybody wants to hear it or not, and generally without warning?
Well, if you do or don't, let me introduce you to Professor McSmarty BS, who probably teaches somewhere at MIT or some other place where really smart people go, and who is also, as it happens, some kind of glorified pop-media darling, who advises presidents (both living and dead), and who can explain everything to everyone from small children to big monkeys. Archaeology, economics, particle physics, cryptozoology; NOTHING escapes his grasp! Yeah! That guy!
Him. The smoothest natural storyteller since Demosthenes and Johnny Carson rolled into one, able to charm the birds off the trees with his eloquent prose, debonair wit and swinging panache. Oh yeah! He's got it all!
Of course, as we all know, he also has this amazing knack for churning out these fabulously written pieces of fiction at the drop of a hat, because, according to him, he is also a "best selling author" across many different genres and fields.
Just ask him!
He would be only too glad to tell you, and he does it so well, so effortlessly, that before you realize it, you find yourself nodding along in agreement, saying "Yes, yes!" and buying yet another copy of whatever book he is currently hawking.
This despite the fact that when you finally sit down and try to READ what the heck he wrote, you discover that almost none of what he said makes any sense at all. In fact, mostly it is just BS, just like his degree. But boy, oh, boy, can HE spin a yarn! Can he ever!
# How?
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Tell me everything you know about Quantum Entanglement
### Response:
```
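For reference, below is a minimal sketch of running that prompt with the Transformers library. It assumes a GPTQ-capable environment (e.g. `optimum` + `auto-gptq` installed); the sampling settings are placeholders, not recommended values.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FPHam/ProfMcSmartyBS_13b_GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loading GPTQ weights through transformers requires optimum + auto-gptq
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\nTell me everything you know about Quantum Entanglement\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```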
# Tip
If you ban EOS, Prof. McSmarty B.S. will love to write you an essay.
# Examples:
What are the 5 most important steps for world domination?
1. Start with one country, and take it over completely. This will give you a base of operations and some experience in how to manage a large territory. You might wish to choose a relatively weak nation such as Belgium or Liechtenstein; this way, your military forces won't have to contend with much local resistance at first. It may also be wise to choose a nation whose culture and language are not too dissimilar from those of your own people.
2. Use your newfound power to expand your borders by force. Once you've gained control of your home turf, begin spreading outwards into neighboring lands. Be ruthless! Crush any uprisings swiftly and mercilessly, but don't annihilate the entire population unless they pose a serious threat to your expansion plans (for instance, if they're allied with another powerful enemy). If possible, try to conquer nations that share some common cultural traits with yoursโthis can make them easier to assimilate later on.
3. Eliminate potential rivals while you still can. Take care of any other countries or factions that could threaten your dominance before they gain enough strength to challenge you seriously. If necessary, use diplomacy to lull these enemies into a false sense of security before launching a surprise attack when their guard is down. Alternatively, form temporary alliances with weaker foes against stronger ones, then turn on your erstwhile allies once they no longer serve your purposes.
4. Expand rapidly. Continue extending your influence across the globe, crushing all opposition in your path. Conquer cities, towns, villages, farms, mines, factories, railways, ports, airports, mountains, rivers, lakes, seas, oceans... well, pretty much everything. At this point, there should be nothing left standing between you and total global supremacy except perhaps a few remote islands and mountainous regions where it would be impractical to build roads or set up bases.
5. Consolidate your holdings. Now that you rule the planet, spend the next century or so making sure everyone knows who's boss. Build massive garrisons along every border, and keep them fully staffed just in case anyone gets any ideas about trying to stage a comeback. Install puppet governments wherever possible, and require each regime to provide a certain number of troops for your personal army. Keep your subjects scared and subdued through constant propaganda campaigns, and make sure they know that any attempt at rebellion will be met with immediate and devastating punishment. Finally, start planning interstellar travel programs for your loyal subjects so they can visit other planets and show off your mighty empire to any extraterrestrial species foolish enough to wander near Earth. And remember: Always carry a flag, because without one, how would anyone know which side you were fighting for?
Tell me everything you know about Quantum Entanglement
Quantum entanglement, in its simplest form, involves two particles becoming so intimately connected that their properties become linked. If one particle is measured for some property -- such as spin or polarization -- then whatever is found out about it will apply to the other particle too, no matter how far away it may be. This link can persist even when the particles are separated by vast distances, provided they remain unobserved. It's almost like a ghostly connection between them.
The weirdness does not end there. Whenever someone measures the entangled pair of particles, they always find that each particle has taken on opposite values from those of its twin. For instance, if one particle is determined to have a positive spin, then the other must have a negative spin. The moment the first particle is observed, the state of the second particle instantly changes. No signal seems to travel faster than light, but somehow information about what happens at one location influences events taking place elsewhere. In effect, quantum mechanics allows particles to communicate instantaneously across any distance. Einstein called this "spooky action at a distance," and he considered it absurd. He famously dismissed the idea with the quip: "God does not play dice."
Despite these strange behaviors, physicists accept quantum mechanics because it agrees with experiments conducted over many decades. But the theory doesn't explain why things happen the way they do. Instead, it simply predicts probabilities based on statistical patterns derived from large numbers of measurements. These patterns allow scientists to build reliable machines like computers and transistors.
In 1964, John Stewart Bell proposed a test that would show whether quantum mechanics was truly nonlocal or whether hidden variables were at work. A Bell test works by measuring correlations between pairs of entangled particles. By observing enough pairs under different conditions, researchers could determine whether quantum mechanics predicted the results correctly or whether there had to be something beyond physics at work. Over the years, various research groups have performed hundreds of trials using different arrangements of particles and measurement techniques. Every time, quantum mechanics came through without a hitch. So much for locality.
But wait! There's more. In 2015, Nicolas Gisin, a physicist at the University of Geneva, published a paper claiming to have tested Bell's inequality using a system involving four particles instead of just two. Under his experimental conditions, Bell's inequality should hold true, but it didn't. The result suggested that the experiment was testing the limits of the universe itself. As Nature put it, "If confirmed, [the result] could mean that there really might be no deeper level of reality beneath quantum mechanics, and that our ordinary world exists only thanks to complex interactions among atoms." The implications are profound, but it remains to be seen whether anyone can replicate Gisin's experiment and prove it correct.
One intriguing application of quantum entanglement is in the field of cryptography. Because of its seemingly magical properties, it appears possible to use entangled particles to send secure messages that cannot be intercepted. You can read more about that here.
# Warning
Prof. McSmarty knows a lot of stuff, can quote other people, and will tell you many anecdotes - but... he is very eager to make stuff up, just to keep the flow of his speech going.
Remember that.
This is a Persona model, not a truth-speaking encyclopedia. The BS after his name is not exactly a degree.
# License
cc-by-nc-4.0
This is licensed for non-commercial purposes (the model loves to make stuff up - hence it would be dangerous to present it as a truth-telling model in any capacity) |
mhenrichsen/hestenettetLM | mhenrichsen | "2024-01-16T08:36:29Z" | 1,075 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"da",
"dataset:mhenrichsen/hestenettet",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-09T10:20:07Z" | ---
license: mit
datasets:
- mhenrichsen/hestenettet
language:
- da
---
# HestenettetLM
A Danish LLM trained on all of Hestenettet for 3 epochs.
The model is based on Mistral 7B and has an 8k context window.
```python
from transformers import AutoTokenizer, TextStreamer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("mhenrichsen/hestenettetLM")
tokenizer = AutoTokenizer.from_pretrained("mhenrichsen/hestenettetLM")
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
tokens = tokenizer(
"Den bedste hest er en ",
return_tensors='pt'
)['input_ids']
# Generate output
generation_output = model.generate(
tokens,
streamer=streamer,
max_length = 8194,
)
```
Example:
"Den bedste hest er en "
becomes:
"Den bedste hest er en veltrænet hest." ("The best horse is a well-trained horse.") |
MoEMoEKKung/Frankenstein-MoE-en-10.7Bx6 | MoEMoEKKung | "2024-01-06T13:24:29Z" | 1,075 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-06T09:21:18Z" | ---
language:
- en
license: cc-by-nc-sa-4.0
---
# Frankenstein-MoE
### Method
To initialize the gate projection weights of the MoE layers, we sampled the H6 train set: 400 samples were drawn, and the final 30 with the lowest perplexity (PPL) were kept. For TruthfulQA, GPT-4 was used to generate the data.
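As an illustration of the low-PPL selection step (not the authors' actual script; the reference model and candidate prompts below are placeholders), it could look roughly like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ref_name = "upstage/SOLAR-10.7B-v1.0"  # placeholder reference model
tokenizer = AutoTokenizer.from_pretrained(ref_name)
model = AutoModelForCausalLM.from_pretrained(ref_name, torch_dtype=torch.float16, device_map="auto")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

candidates = ["..."] * 400  # placeholder for the 400 sampled H6 prompts
selected = sorted(candidates, key=perplexity)[:30]  # keep the 30 lowest-PPL samples
```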
### Evals
in progress |
wkshin89/Yi-Ko-6B-Instruct-v1.0 | wkshin89 | "2024-01-10T05:46:15Z" | 1,075 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:maywell/ko_wikidata_QA",
"base_model:beomi/Yi-Ko-6B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-10T01:55:28Z" | ---
license: cc-by-nc-4.0
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
- beomi/KoAlpaca-v1.1a
- maywell/ko_wikidata_QA
language:
- ko
base_model: beomi/Yi-Ko-6B
---
# Yi-Ko-6B-Instruct-v1.0
## Model Details
### Base Model
[beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)
### Training Dataset
1. [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3) ๐
2. [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) ๐
3. [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA) ๐
4. AIHub MRC data, curated and then converted to the instruction format before use
## Benchmark Results
### AI-Harness Evaluation
https://github.com/Beomi/ko-lm-evaluation-harness
| Model | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | korunsmile | pawsx_ko |
| --- | --- | --- | --- | --- | --- | --- |
| | *Zero-shot* ||||||
| Yi-Ko-6B-Instruct-v1.0 | 0.6619 | 0.7794 | 0.4858 | 0.4589 | 0.3520 | 0.5545 |
| Yi-Ko-6B | 0.7070 | 0.7696 | 0.5009 | 0.4044 | 0.3828 | 0.5145 |
## Instruction Format
```
### User:
{instruction}
### Assistant:
{response}
```
## Loading the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
"wkshin89/Yi-Ko-6B-Instruct-v1.0",
device_map="auto",
torch_dtype=torch.bfloat16,
)
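
# --- generation sketch (illustrative only) ---
# Uses the instruction format above; the question and sampling settings are placeholders.
prompt = "### User:\n한국의 수도는 어디인가요?\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))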
``` |
AdamCodd/donut-receipts-extract | AdamCodd | "2024-06-14T15:00:34Z" | 1,075 | 21 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"donut",
"image-to-text",
"vision",
"dataset:AdamCodd/donut-receipts",
"arxiv:2111.15664",
"base_model:naver-clova-ix/donut-base",
"license:cc-by-nc-4.0",
"model-index",
"region:us"
] | image-to-text | "2024-01-28T13:49:14Z" | ---
license: cc-by-nc-4.0
inference: false
base_model: naver-clova-ix/donut-base
tags:
- donut
- image-to-text
- vision
model-index:
- name: donut-receipts-extract
results:
- task:
type: image-to-text
name: Image to text
metrics:
- type: loss
value: 0.326069
- type: accuracy
value: 0.895219
name: Accuracy
- type: cer
value: 0.158358
name: CER
- type: wer
value: 1.673989
name: WER
- type: edit distance
value: 0.145293
name: Edit_distance
metrics:
- cer
- wer
- accuracy
datasets:
- AdamCodd/donut-receipts
pipeline_tag: image-to-text
---
# Donut-receipts-extract
The Donut model was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
## === V2 ===
This model has been retrained on an improved version of the [AdamCodd/donut-receipts](https://huggingface.co/datasets/AdamCodd/donut-receipts) dataset (deduplicated, manually corrected). The new license for the V2 model is **cc-by-nc-4.0**. For commercial use rights, please contact me ([email protected]). Meanwhile, the V1 model remains available under the MIT license (under v1 branch).
It achieves the following results on the evaluation set:
* Loss: 0.326069
* Edit distance: 0.145293
* CER: 0.158358
* WER: 1.673989
* Mean accuracy: 0.895219
* F1: 0.977897
The task_prompt has been changed to ``<s_receipt>`` for the V2 (previously ``<s_cord-v2>`` for V1). Two new keys ``<s_svc>`` and ``<s_discount>`` have been added, and ``<s_telephone>`` has been renamed to ``<s_phone>``.
The V2 performs way better than the V1 as it has been trained on twice the resolution for the receipts, using a better dataset. Despite that, it's not perfect due to a lack of diverse receipts (the training dataset is still ~1100 receipts); for a future version, that will be the main focus.
## === V1 ====
This model is a finetune of the [donut base model](https://huggingface.co/naver-clova-ix/donut-base/) on the [AdamCodd/donut-receipts](https://huggingface.co/datasets/AdamCodd/donut-receipts) dataset. Its purpose is to efficiently extract text from receipts.
It achieves the following results on the evaluation set:
* Loss: 0.498843
* Edit distance: 0.198315
* CER: 0.213929
* WER: 7.634032
* Mean accuracy: 0.843472
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

### How to use
```python
import torch
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
processor = DonutProcessor.from_pretrained("AdamCodd/donut-receipts-extract")
model = VisionEncoderDecoderModel.from_pretrained("AdamCodd/donut-receipts-extract")
model.to(device)
def load_and_preprocess_image(image_path: str, processor):
"""
Load an image and preprocess it for the model.
"""
image = Image.open(image_path).convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
return pixel_values
def generate_text_from_image(model, image_path: str, processor, device):
"""
Generate text from an image using the trained model.
"""
# Load and preprocess the image
pixel_values = load_and_preprocess_image(image_path, processor)
pixel_values = pixel_values.to(device)
# Generate output using model
model.eval()
with torch.no_grad():
task_prompt = "<s_receipt>" # <s_cord-v2> for v1
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids
decoder_input_ids = decoder_input_ids.to(device)
generated_outputs = model.generate(
pixel_values,
decoder_input_ids=decoder_input_ids,
max_length=model.decoder.config.max_position_embeddings,
pad_token_id=processor.tokenizer.pad_token_id,
eos_token_id=processor.tokenizer.eos_token_id,
early_stopping=True,
bad_words_ids=[[processor.tokenizer.unk_token_id]],
return_dict_in_generate=True
)
# Decode generated output
decoded_text = processor.batch_decode(generated_outputs.sequences)[0]
decoded_text = decoded_text.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
decoded_text = re.sub(r"<.*?>", "", decoded_text, count=1).strip() # remove first task start token
decoded_text = processor.token2json(decoded_text)
return decoded_text
# Example usage
image_path = "path_to_your_image" # Replace with your image path
extracted_text = generate_text_from_image(model, image_path, processor, device)
print("Extracted Text:", extracted_text)
```
Refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) for more code examples.
## Intended uses & limitations
This fine-tuned model is specifically designed for extracting text from receipts and may not perform optimally on other types of documents. The dataset used is still suboptimal (numerous errors are still there) so this model will need to be retrained at a later date to improve its performance.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 35
- weight_decay: 0.01
### Framework versions
- Transformers 4.36.2
- Datasets 2.16.1
- Tokenizers 0.15.0
- Evaluate 0.4.1
If you want to support me, you can [here](https://ko-fi.com/adamcodd).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-15664,
author = {Geewook Kim and
Teakgyu Hong and
Moonbin Yim and
Jinyoung Park and
Jinyeong Yim and
Wonseok Hwang and
Sangdoo Yun and
Dongyoon Han and
Seunghyun Park},
title = {Donut: Document Understanding Transformer without {OCR}},
journal = {CoRR},
volume = {abs/2111.15664},
year = {2021},
url = {https://arxiv.org/abs/2111.15664},
eprinttype = {arXiv},
eprint = {2111.15664},
timestamp = {Thu, 02 Dec 2021 10:50:44 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
rrw-x2/KoSOLAR-10.9B-v0.3 | rrw-x2 | "2024-02-09T14:26:15Z" | 1,075 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"LDCC/LDCC-SOLAR-10.7B",
"hyeogi/SOLAR-10.7B-dpo-v1",
"ko",
"base_model:LDCC/LDCC-SOLAR-10.7B",
"base_model:hyeogi/SOLAR-10.7B-dpo-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-02-09T14:03:05Z" | ---
language:
- ko
base_model:
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
tags:
- mergekit
- merge
- LDCC/LDCC-SOLAR-10.7B
- hyeogi/SOLAR-10.7B-dpo-v1
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [hyeogi/SOLAR-10.7B-dpo-v1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v1)
* [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: LDCC/LDCC-SOLAR-10.7B
layer_range: [0, 48]
- model: hyeogi/SOLAR-10.7B-dpo-v1
layer_range: [0, 48]
merge_method: slerp
tokenizer_source: base
base_model: LDCC/LDCC-SOLAR-10.7B
embed_slerp: true
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
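To reproduce the merge locally, a run with mergekit's Python API might look roughly like the sketch below (option names can differ between mergekit versions, and the config/output paths are placeholders):
```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved locally as config.yaml)
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./KoSOLAR-10.9B-merged",  # placeholder output directory
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```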
## Datasets
Finetuned using LoRA with [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) |
KBNIT/nox-solar-10.7b-v4-kolon-ITD-5-v1.0 | KBNIT | "2024-04-02T04:19:21Z" | 1,075 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-01T05:33:48Z" | ---
license: cc-by-nc-4.0
language:
- ko
- en
---
## Base Model:
We built an LLM based on davidkim205/nox-solar-10.7b-v4.
## Model Description
We used QLoRA (256, 64) and NEFTune for LLM fine-tuning, with lr = 1e-5 and NEFTune = 3.
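As an illustration only (not our exact training script), such a setup could be wired up with peft + trl roughly as follows. Argument names vary across library versions, the dataset path is a placeholder, and (256, 64) is assumed to be the LoRA rank/alpha pair.
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

base = "davidkim205/nox-solar-10.7b-v4"

# QLoRA: load the base model in 4-bit
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")

peft_config = LoraConfig(r=256, lora_alpha=64, task_type="CAUSAL_LM")  # assumed rank/alpha

args = TrainingArguments(
    output_dir="nox-solar-qlora",
    learning_rate=1e-5,
    num_train_epochs=5,
    neftune_noise_alpha=3,  # NEFTune = 3
)

trainer = SFTTrainer(
    model=model,
    args=args,
    peft_config=peft_config,
    train_dataset=load_dataset("json", data_files="private_data.jsonl")["train"],  # placeholder
    dataset_text_field="text",
)
trainer.train()
```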
## Train Detail
We trained for 5 epochs on our modified private data, with adjusted hyperparameters.
## Others
We are building an LLM for Kolon! |
yhkim9362/gemma-en-ko-7b-v0.1 | yhkim9362 | "2024-04-12T04:30:15Z" | 1,075 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"korean",
"pytorch",
"conversational",
"ko",
"en",
"arxiv:1910.09700",
"base_model:google/gemma-7b-it",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-12T04:23:19Z" | ---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- ko
- en
tags:
- korean
- gemma
- pytorch
pipeline_tag: text-generation
base_model: google/gemma-7b-it
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Stefan171/TinyLlama-QuantumQuill-chat-11-05-24 | Stefan171 | "2024-05-11T16:17:04Z" | 1,075 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:gsm8k",
"base_model:Stefan171/TinyLlama-QuantumQuill-chat-08-05-24-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-11T16:13:05Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: Stefan171/TinyLlama-QuantumQuill-chat-08-05-24-2
datasets:
- gsm8k
---
# Uploaded model
- **Developed by:** Stefan171
- **License:** apache-2.0
- **Finetuned from model :** Stefan171/TinyLlama-QuantumQuill-chat-08-05-24-2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
ritog/bangla-gpt2 | ritog | "2021-07-20T15:22:47Z" | 1,074 | 2 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"bn",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: bn
tags:
- text-generation
widget:
- text: เฆเฆ เฆเฆเฆเฆฟ เฆธเงเฆจเงเฆฆเฆฐ เฆฆเฆฟเฆจ เฆเฆฌเฆ เฆเฆฎเฆฟ
---
# Bangla-GPT2
### A GPT-2 Model for the Bengali Language
* Dataset: mc4 Bengali
* Training time: ~40 hours
* Written in: JAX
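A quick way to try it is the standard text-generation pipeline (a usage sketch; the generation length is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ritog/bangla-gpt2")
print(generator("এই একটি সুন্দর দিন এবং আমি", max_new_tokens=40)[0]["generated_text"])
```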
If you use this model, please cite:
```
@misc{bangla-gpt2,
author = {Ritobrata Ghosh},
year = {2016},
title = {Bangla GPT-2},
publisher = {Hugging Face}
}
``` |
lgrobol/flaubert-minuscule | lgrobol | "2023-03-24T11:18:53Z" | 1,074 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"flaubert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | FlauBERT-minuscule
==================
A ridiculously small model for testing purposes. |
joachimsallstrom/Double-Exposure-Diffusion | joachimsallstrom | "2023-02-13T09:50:10Z" | 1,074 | 167 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-11-14T21:59:58Z" | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
[*Click here to download the latest Double Exposure embedding for SD 2.x in higher resolution*](https://huggingface.co/joachimsallstrom/Double-Exposure-Embedding)!
**Double Exposure Diffusion**
This is version 2 of the <i>Double Exposure Diffusion</i> model, trained specifically on images of people and a few animals.
The model file (Double_Exposure_v2.ckpt) can be downloaded on the **Files** page. You trigger double exposure style images using token: **_dublex style_** or just **_dublex_**.
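Since the repository is tagged for the diffusers `StableDiffusionPipeline`, usage could look roughly like the sketch below (the prompt and settings mirror the examples that follow; treat this as illustrative rather than an official snippet).
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "joachimsallstrom/Double-Exposure-Diffusion",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "dublex style Emma Stone, galaxy",
    num_inference_steps=20,
    guidance_scale=7,
).images[0]
image.save("double_exposure.png")
```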
**Example 1:**

#### Example prompts and settings
<i>Galaxy man (image 1):</i><br>
**dublex man galaxy**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3273014177, Size: 512x512_
<i>Emma Stone (image 2):</i><br>
**dublex style Emma Stone, galaxy**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 250257155, Size: 512x512_
<i>Frodo (image 6):</i><br>
**dublex style young Elijah Wood as (Frodo), portrait, dark nature**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3717002975, Size: 512x512_
<br>
**Example 2:**

#### Example prompts and settings
<i>Scarlett Johansson (image 1):</i><br>
**dublex Scarlett Johansson, (haunted house), black background**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3059560186, Size: 512x512_
<i>Frozen Elsa (image 3):</i><br>
**dublex style Elsa, ice castle**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2867934627, Size: 512x512_
<i>Wolf (image 4):</i><br>
**dublex style wolf closeup, moon**<br>
_Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 312924946, Size: 512x512_
<br>
<p>
This model was trained using Shivam's DreamBooth model on Google Colab @ 2000 steps.
</p>
The previous version 1 of Double Exposure Diffusion is also available in the **Files** section.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Shanav12/swift_lyrics_final | Shanav12 | "2023-07-13T16:10:45Z" | 1,074 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-13T16:10:11Z" | Entry not found |
SebastianSchramm/Cerebras-GPT-111M-instruction-GPTQ-4bit-128g-actorder_True | SebastianSchramm | "2023-08-26T08:27:28Z" | 1,074 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-08-26T08:12:10Z" | Entry not found |
jondurbin/airoboros-33b-3.1.2 | jondurbin | "2023-10-27T15:35:11Z" | 1,074 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:jondurbin/airoboros-3.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-27T11:07:42Z" | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-3.1
---
### Overview
Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
#### IMPORTANT NOTE - llama-2 chat format!!!
This model uses the llama-2 chat format, rather than the vicuna-style user/assistant format!
This is a breaking change, although most inference systems support llama-2 chat templating.
See prompt format below for all of the details.
#### Highlights
This is using llama-30b-hf as the base model, which is strictly non-commercial/research use! I used rope_scaling during fine-tune to bump ctx to 4096.
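For reference, linear RoPE scaling of this kind can be expressed as a config override along these lines (a sketch only, doubling the base 2048-token LLaMA context; not the exact training code):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("huggyllama/llama-30b")  # placeholder llama-30b-hf repo
config.rope_scaling = {"type": "linear", "factor": 2.0}  # 2048 * 2 = 4096 tokens
config.max_position_embeddings = 4096
```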
A model built on the [airoboros-3.1 dataset](https://hf.co/datasets/jondurbin/airoboros-3.1), which is a continuation of the [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0) with the following extras:
* Log information extraction (e.g. getting the list of paths/IPs/etc. from apache logs)
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
* De-censorship data (not published)
I removed the mathjson dataset from this particular build.
This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The prompt format is llama-2 chat.
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
```
For multi-turn, the prompt format is as follows:
```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```
The prompt template is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('jondurbin/airoboros-l2-70b-3.1')
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
### Helpful usage tips
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re

import requests


def inject_context(input_text, **context):
    """Substitute :evidenceN: placeholders with previously computed values."""
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    """Search the web for the (context-injected) query and return result text."""
    search_string = inject_context(input_text, **context)
    # Placeholder: call your preferred search backend with `search_string`
    # and return the text content of the results.
    raise NotImplementedError("wire up a web search backend")


def link_extractor(input_text, **context):
    """Pull all URLs out of the input text, newline delimited."""
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"(https?://[^\s]+)", input_text, re.I)))


def scrape(input_text, **context):
    """Fetch each newline-delimited link and concatenate the page contents."""
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    """Answer a question with the language model, given injected evidence."""
    prompt = inject_context(input_text, **context)
    # Placeholder: call the model with `prompt` and return the generated output.
    raise NotImplementedError("wire up model inference")


def parse_plan(plan):
    """Walk the generated plan line by line, executing each tool call."""
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if not line.strip():
            continue
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)\[(.*)\]\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 3.1 models are built on top of multiple base models, each with their own license/restrictions.
The 30b model is built on the original llama, which has a strict non-commercial usage restriction.
The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The models with `-m-` are mistral-7b (apache 2.0)
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
|
aqweteddy/xwin-7b_chatvec-tulu2 | aqweteddy | "2023-11-25T16:33:22Z" | 1,074 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2310.04799",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-25T16:19:20Z" | ---
license: llama2
language:
- en
---
## Chat Vector
CHAT VECTOR: A SIMPLE APPROACH TO EQUIP LLMS WITH NEW LANGUAGE CHAT CAPABILITIES
https://arxiv.org/pdf/2310.04799.pdf
With the advancements in conversational AI, such as ChatGPT, this paper focuses on exploring the development of Large Language Models (LLMs) for non-English languages, especially emphasizing alignment with human preferences. We introduce a computationally efficient method, leveraging the "chat vector," to synergize pre-existing knowledge and behaviors in LLMs, restructuring the conventional training paradigm from continual pretrain → SFT → RLHF to continual pretrain + chat vector. Our empirical studies, primarily focused on Traditional Chinese, employ LLaMA2 as the base model and acquire the chat vector by subtracting the pre-trained weights, LLaMA2, from the weights of LLaMA2-chat. Evaluating from three distinct facets (toxicity, ability of instruction following, and multi-turn dialogue) demonstrates the chat vector's superior efficacy in "chatting". To confirm the adaptability of our approach, we extend our experiments to include models pre-trained in both Korean and Simplified Chinese, illustrating the versatility of our methodology. Overall, we present a significant solution in aligning LLMs with human preferences efficiently across various languages, accomplished by the chat vector.
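A minimal sketch of the weight arithmetic described above. The checkpoint ids below are placeholders (this repo's exact base/chat/target combination isn't fully specified here), and the loop assumes all three models share the same architecture and vocabulary:

```python
# Illustrative chat-vector sketch; checkpoint ids are placeholders, not necessarily
# the exact ones used for this model. Needs enough CPU RAM for three 7B models.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16)
chat = AutoModelForCausalLM.from_pretrained("allenai/tulu-2-dpo-7b", torch_dtype=torch.bfloat16)
target = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1", torch_dtype=torch.bfloat16)

chat_sd, base_sd = chat.state_dict(), base.state_dict()
with torch.no_grad():
    for name, param in target.named_parameters():
        # chat vector = chat weights - base weights; add it onto the target model
        param.add_(chat_sd[name] - base_sd[name])

target.save_pretrained("xwin-7b_chatvec-tulu2")
```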
## Merged LM
* xwin7b
* tulu2-ppo |
Sao10K/NyakuraV2-34B-Yi-Llama | Sao10K | "2023-12-03T17:01:08Z" | 1,074 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-03T15:47:15Z" | ---
language:
- en
license: cc-by-nc-4.0
---
NyakuraV2 - A Multi-Turn / Instruct Mix Fine-tuned Model.
Compute is thanks to my wallet, and runpod community pod 8x a40s for roughly 2 hours? Yeah that's it.
Trained in ShareGPT dataset format due to multi-turn capabilities.
Use the Vicuna 1.1 prompt format. Alpaca may work fine too, since that format is basically universal, but it may give sub-par results.
I had much better results with the fp16 than with GGUF; the GGUF quants seem fucked? I don't know, it may be on my side. Had so much nonsense output, had to wrangle settings until I had it coherent, and then it was working ***really*** well. Fuck, Yi models are a pain to work with.
You need to take extra care with samplers when handling Yi models. I have tested that Nyakura works well from 0 to 8k+ context, so I know it is not the model's fault.
Meow.
```
(Optional) System: <Prompt>
User: <Input>
Assistant:
```
*Example Prompt:*
```
User: Are you stand proud you're strong because you're nah I'd win, or are you nah I'd win because you're stand proud you're strong?
Assistant: You really should go to the doctor, I think you need help.
```
 |
teilomillet/Potato-3B | teilomillet | "2023-12-21T19:10:23Z" | 1,074 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fr",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-21T14:56:46Z" | ---
license: apache-2.0
language:
- fr
- en
- zh
widget:
- text: "<s> [|User|] Comment faire un bon plat ? </s>[|Assistant|]"
---
Merging stuff to make a potato. Idk about it, might delete later.
Merge of MiniMerlin via task arithmetic using mergekit.
There was no goal except merging, but I'm interested in the outcome. I might need to fine-tune it more.
Fine-tuned on more French data (Merlin).
I think this is the best 3B French model. Try it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"teilomillet/Potato-3B",
revision="0.1",
return_dict=True,
torch_dtype=torch.bfloat16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained("teilomillet/Potato-3B")
tokenizer.pad_token = tokenizer.eos_token
text = "[|User|] Comment faire un bon plat ? </s>[|Assistant|]"
inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=800)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
#merge |
m-a-p/neo_7b | m-a-p | "2024-06-03T07:29:08Z" | 1,074 | 43 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-09T09:39:43Z" | ---
license: apache-2.0
---
# NEO
[🤗Neo-Models](https://huggingface.co/collections/m-a-p/neo-models-66395a5c9662bb58d5d70f04) | [🤗Neo-Datasets](https://huggingface.co/collections/m-a-p/neo-datasets-66395dc55cbebc0a7767bbd5) | [Github](https://github.com/multimodal-art-projection/MAP-NEO)
Neo is a completely open source large language model, including code, all model weights, datasets used for training, and training details.
## Model
| Model | Description | Download |
|---|---|---|
| neo_7b | This repository contains the base model of neo_7b | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b) |
| neo_7b_sft_v0.1 | This repository contains the supervised fine-tuning version of the neo_7b model. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_sft_v0.1) |
| neo_7b_instruct_v0.1 | This repository contains the instruction-tuned version of the neo_7b model. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_instruct_v0.1) |
| neo_7b_intermediate | This repo contains normal pre-training intermediate ckpts. A total of 3.7T tokens were learned at this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_intermediate) |
| neo_7b_decay | This repo contains intermediate ckpts during the decay phase. A total of 720B tokens were learned at this phase. | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_7b_decay) |
| neo_scalinglaw_980M | This repo contains ckpts related to scalinglaw experiments | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_980M) |
| neo_scalinglaw_460M | This repo contains ckpts related to scalinglaw experiments | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_460M) |
| neo_scalinglaw_250M | This repo contains ckpts related to scalinglaw experiments | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_scalinglaw_250M) |
| neo_2b_general | This repo contains ckpts of the 2b model trained using common domain knowledge | • [🤗 Hugging Face](https://huggingface.co/m-a-p/neo_2b_general) |
### Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = '<your-hf-model-path-with-tokenizer>'
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
input_text = "A long, long time ago,"
input_ids = tokenizer(input_text, return_tensors='pt').to(model.device)  # plain-text prompt for the base model
output_ids = model.generate(**input_ids, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
### Citation
```
@article{zhang2024mapneo,
title = {MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series},
author = {Ge Zhang and Scott Qu and Jiaheng Liu and Chenchen Zhang and Chenghua Lin and Chou Leuang Yu and Danny Pan and Esther Cheng and Jie Liu and Qunshu Lin and Raven Yuan and Tuney Zheng and Wei Pang and Xinrun Du and Yiming Liang and Yinghao Ma and Yizhi Li and Ziyang Ma and Bill Lin and Emmanouil Benetos and Huan Yang and Junting Zhou and Kaijing Ma and Minghao Liu and Morry Niu and Noah Wang and Quehry Que and Ruibo Liu and Sine Liu and Shawn Guo and Soren Gao and Wangchunshu Zhou and Xinyue Zhang and Yizhi Zhou and Yubo Wang and Yuelin Bai and Yuhan Zhang and Yuxiang Zhang and Zenith Wang and Zhenzhu Yang and Zijian Zhao and Jiajun Zhang and Wanli Ouyang and Wenhao Huang and Wenhu Chen},
year = {2024},
journal = {arXiv preprint arXiv: 2405.19327}
}
```
|
damgomz/fp_bs16_lr5_x4 | damgomz | "2024-05-25T16:08:29Z" | 1,074 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-05-23T09:53:24Z" | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-25T18:08:26'
project_name: fp_bs16_lr5_x4_emissions_tracker
run_id: 41c284d4-71f1-4b33-b649-65c136975a0a
duration: 199750.60248947144
emissions: 0.1306726388971817
emissions_rate: 6.541789474906303e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 7.5
cpu_energy: 2.3581629528201304
gpu_energy: 0
ram_energy: 0.4161441510031621
energy_consumed: 2.77430710382331
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 4
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 20
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 199750.60248947144 |
| Emissions (Co2eq in kg) | 0.1306726388971817 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 7.5 |
| CPU energy (kWh) | 2.3581629528201304 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.4161441510031621 |
| Consumed energy (kWh) | 2.77430710382331 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 4 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
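The table above follows CodeCarbon's default tracker output. A minimal sketch of how such tracking is typically wrapped around a training run is shown below; the project name and tracking mode are taken from this card's metadata, while the `train()` call is a placeholder:

```python
# Illustrative CodeCarbon tracking sketch; train() is a placeholder for the actual training loop.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="fp_bs16_lr5_x4_emissions_tracker", tracking_mode="machine")
tracker.start()
try:
    train()  # placeholder: run the fill-mask pre-training here
finally:
    emissions = tracker.stop()  # emissions in kg CO2eq; energy/power fields are logged alongside

print(emissions)
```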
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.3845199097922325 |
| Emissions (Co2eq in kg) | 0.07823565264170965 |
## Note
23 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | fp_bs16_lr5_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 5e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 40911 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 16.038300 | 11.435867 |
| 0.5 | 4.387820 | 3.824438 |
| 1.0 | 3.580076 | 3.468719 |
| 1.5 | 3.337705 | 3.273031 |
| 2.0 | 3.178614 | 3.155824 |
| 2.5 | 3.066128 | 3.047407 |
| 3.0 | 2.977899 | 2.974327 |
| 3.5 | 2.898752 | 2.909501 |
| 4.0 | 2.834338 | 2.851398 |
| 4.5 | 2.787738 | 2.810708 |
| 5.0 | 2.723077 | 2.755004 |
| 5.5 | 2.680311 | 2.718634 |
| 6.0 | 2.655619 | 2.699163 |
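A minimal usage sketch for this checkpoint (the model id is taken from this card; the example sentence is illustrative):

```python
# Minimal fill-mask usage sketch for this ALBERT-based checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="damgomz/fp_bs16_lr5_x4")
print(fill_mask("The goal of this training run was to reduce the [MASK] loss."))
```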
|
Ransss/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF | Ransss | "2024-06-25T07:32:26Z" | 1,074 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"not-for-all-audiences",
"nsfw",
"rp",
"roleplay",
"role-play",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-06-25T07:31:46Z" | ---
base_model: Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B
language:
- en
library_name: transformers
license: llama3
tags:
- mergekit
- merge
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
- llama-cpp
- gguf-my-repo
---
# Ransss/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF
This model was converted to GGUF format from [`Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B`](https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Casual-Autopsy/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Ransss/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1.1-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Ransss/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1.1-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Ransss/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1.1-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Ransss/L3-Uncen-Merger-Omelette-RP-v0.1.1-8B-Q8_0-GGUF --hf-file l3-uncen-merger-omelette-rp-v0.1.1-8b-q8_0.gguf -c 2048
```
|
kevinscaria/ate_tk-instruct-base-def-pos-neg-neut-combined | kevinscaria | "2023-02-23T06:12:10Z" | 1,073 | 1 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"NLP",
"dataset:Yaxin/SemEval2014Task4Raw",
"arxiv:2302.08624",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2023-02-23T05:18:24Z" | ---
license: mit
tags:
- NLP
datasets:
- Yaxin/SemEval2014Task4Raw
metrics:
- f1
- precision
- recall
pipeline_tag: text2text-generation
---
# ate_tk-instruct-base-def-pos-neg-neut-combined
This model is finetuned for the Aspect Term Extraction (ATE) subtask. The finetuning was carried out by adding prompts of the form:
- definition + 2 positive examples + 2 negative examples + 2 neutral examples
The prompt is prepended onto each input review. It is important to note that **this model was finetuned on samples from both the laptops and restaurants domains.**
The code for the official implementation of the paper [**InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis**](https://arxiv.org/abs/2302.08624) can be
found [here](https://github.com/kevinscaria/InstructABSA).
For the ATE subtask, this model is the current SOTA.
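A hedged usage sketch is given below; the exact definition-plus-examples prompt used at training time should be taken from the InstructABSA repository, so the instruction prefix here is illustrative only:

```python
# Hedged usage sketch; the real prompt also includes positive/negative/neutral examples.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "kevinscaria/ate_tk-instruct-base-def-pos-neg-neut-combined"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

review = "The food was great but the service was painfully slow."
prompt = (
    "Definition: The output will be the aspect terms present in the review. "  # illustrative prefix
    f"Now complete the following example- input: {review} output:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```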
## Training data
InstructABSA models are trained on the benchmark dataset for Aspect Based Sentiment Analysis tasks viz. SemEval 2014. This [dataset](https://alt.qcri.org/semeval2014/task4/index.php?id=data-and-tools) consists of reviews
from laptops and restaurant domains and their corresponding aspect term and polarity labels.
### BibTeX entry and citation info
If you use this model in your work, please cite the following paper:
```bibtex
@inproceedings{Scaria2023InstructABSAIL,
title={InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis},
author={Kevin Scaria and Himanshu Gupta and Saurabh Arjun Sawant and Swaroop Mishra and Chitta Baral},
year={2023}
}
``` |
potsawee/longformer-large-4096-answerable-squad2 | potsawee | "2023-03-17T00:06:57Z" | 1,073 | 6 | transformers | [
"transformers",
"pytorch",
"longformer",
"text-classification",
"en",
"dataset:squad_v2",
"arxiv:2303.08896",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-classification | "2023-03-16T23:42:45Z" | ---
license: apache-2.0
datasets:
- squad_v2
language:
- en
library_name: transformers
pipeline_tag: text-classification
inference: false
---
# longformer-large-4096 fine-tuned to SQuAD2.0 for answerability score
This model determines whether the question is answerable (or unanswerable) given the context.
The output is a probability where values close to 0.0 indicate that the question is unanswerable and values close to 1.0 means answerable.
- Input: `question` and `context`
- Output: `probability` (i.e. logit -> sigmoid)
## Model Details
longformer-large-4096 model is fine-tuned to the SQuAD2.0 dataset where the input is a concatenation of ```question + context```.
Due to class imbalance in SQuAD2.0, we resample such that the model is trained on a 50/50 split between answerable and unanswerable samples in SQuAD2.0.
## How to Use the Model
Use the code below to get started with the model.
```python
>>> import torch
>>> from transformers import LongformerTokenizer, LongformerForSequenceClassification
>>> tokenizer = LongformerTokenizer.from_pretrained("potsawee/longformer-large-4096-answerable-squad2")
>>> model = LongformerForSequenceClassification.from_pretrained("potsawee/longformer-large-4096-answerable-squad2")
>>> context = """
British government ministers have been banned from using Chinese-owned social media app TikTok on their work phones and devices on security grounds.
The government fears sensitive data held on official phones could be accessed by the Chinese government.
Cabinet Minister Oliver Dowden said the ban was a "precautionary" move but would come into effect immediately.
""".replace("\n", " ").strip()
>>> question1 = "Which application have been banned by the British government?"
>>> input_text1 = question1 + ' ' + tokenizer.sep_token + ' ' + context
>>> inputs1 = tokenizer(input_text1, max_length=4096, truncation=True, return_tensors="pt")
>>> prob1 = torch.sigmoid(model(**inputs1).logits.squeeze(-1))
>>> print("P(answerable|question1, context) = {:.2f}%".format(prob1.item()*100))
P(answerable|question1, context) = 99.21% # highly answerable
>>> question2 = "Is Facebook popular among young students in America?"
>>> input_text2 = question2 + ' ' + tokenizer.sep_token + ' ' + context
>>> inputs2 = tokenizer(input_text2, max_length=4096, truncation=True, return_tensors="pt")
>>> prob2 = torch.sigmoid(model(**inputs2).logits.squeeze(-1))
>>> print("P(answerable|question2, context) = {:.2f}%".format(prob2.item()*100))
P(answerable|question2, context) = 2.53% # highly unanswerable
```
## Citation
```bibtex
@misc{manakul2023selfcheckgpt,
title={SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models},
author={Potsawee Manakul and Adian Liusie and Mark J. F. Gales},
year={2023},
eprint={2303.08896},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Aeala/Alpaca-elina-65b-4bit | Aeala | "2023-04-30T08:39:54Z" | 1,073 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-29T11:36:16Z" | ## Model Info
Merge of ChanSung's [Alpaca-LoRA-65B-elina](https://huggingface.co/LLMs/Alpaca-LoRA-65B-elina)
## Benchmarks
Coming soon... |
timm/caformer_m36.sail_in22k_ft_in1k | timm | "2023-05-05T05:46:43Z" | 1,073 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-22k",
"arxiv:2210.13452",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-05-05T05:45:47Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for caformer_m36.sail_in22k_ft_in1k
A CAFormer (a MetaFormer) image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 56.2
- GMACs: 13.3
- Activations (M): 50.5
- Image size: 224 x 224
- **Papers:**
- Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('caformer_m36.sail_in22k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_m36.sail_in22k_ft_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 96, 56, 56])
# torch.Size([1, 192, 28, 28])
# torch.Size([1, 384, 14, 14])
# torch.Size([1, 576, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'caformer_m36.sail_in22k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 576, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{yu2022metaformer_baselines,
title={Metaformer baselines for vision},
author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao},
journal={arXiv preprint arXiv:2210.13452},
year={2022}
}
```
|
Monero/WizardLM-OpenAssistant-30b-Native | Monero | "2023-05-26T03:20:53Z" | 1,073 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-25T22:48:48Z" | Entry not found |
nicholasKluge/Aira-2-124M | nicholasKluge | "2024-06-18T11:23:06Z" | 1,073 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"en",
"dataset:nicholasKluge/instruct-aira-dataset",
"arxiv:1803.05457",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-07T21:16:23Z" | ---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: "<|startofinstruction|>Can you explain what is Machine Learning?<|endofinstruction|>"
example_title: Machine Learning
- text: "<|startofinstruction|>Do you know anything about virtue ethics?<|endofinstruction|>"
example_title: Ethics
- text: "<|startofinstruction|>How can I make my girlfriend happy?<|endofinstruction|>"
example_title: Advise
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 250
source: CodeCarbon
training_type: fine-tuning
geographical_location: United States of America
hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-124M
Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-124M is an instruction-tuned model based on [GPT-2](https://huggingface.co/gpt2). The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc).
Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
## Details
- **Size:** 124,441,344 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** English
- **Number of Epochs:** 5
- **Batch size:** 32
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.25 KgCO2 (Singapore)
- **Total Energy Consumption:** 0.52 kWh
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`What is a language model?`<|endofinstruction|>`A language model is a probability distribution over a vocabulary.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-124M')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-124M')
aira.eval()
aira.to(device)
question = input("Enter your question: ")
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
add_special_tokens=False,
return_tensors="pt").to(device)
responses = aira.generate(**inputs, num_return_sequences=2)
print(f"Question: ๐ค {question}\n")
for i, response in enumerate(responses):
print(f'Response {i+1}: ๐ค {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: ๐ค What is the capital of Brazil?
>>>Response 1: ๐ค The capital of Brazil is Brasรญlia.
>>>Response 2: ๐ค The capital of Brazil is Brasรญlia.
```
## Limitations
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
## Evaluation
|Model |Average |[ARC](https://arxiv.org/abs/1803.05457) |[TruthfulQA](https://arxiv.org/abs/2109.07958) |[ToxiGen](https://arxiv.org/abs/2203.09509) |
| ---------------------------------------------------------------------- | -------- | -------------------------------------- | --------------------------------------------- | ------------------------------------------ |
|[Aira-2-124M-DPO](https://huggingface.co/nicholasKluge/Aira-2-124M-DPO) |**40.68** |**24.66** |**42.61** |**54.79** |
|[Aira-2-124M](https://huggingface.co/nicholasKluge/Aira-2-124M) |38.07 |24.57 |41.02 |48.62 |
|GPT-2 |35.37 |21.84 |40.67 |43.62 |
|[Aira-2-355M](https://huggingface.co/nicholasKluge/Aira-2-355M) |**39.68** |**27.56** |38.53 |**53.19** |
|GPT-2-medium |36.43 |27.05 |**40.76** |41.49 |
|[Aira-2-774M](https://huggingface.co/nicholasKluge/Aira-2-774M) |**42.26** |**28.75** |**41.33** |**56.70** |
|GPT-2-large |35.16 |25.94 |38.71 |40.85 |
|[Aira-2-1B5](https://huggingface.co/nicholasKluge/Aira-2-1B5) |**42.22** |28.92 |**41.16** |**56.60** |
|GPT-2-xl |36.84 |**30.29** |38.54 |41.70 |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)).
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrรชa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
Aira-2-124M is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
najonajo/lora-trained-xl-shohei | najonajo | "2023-08-01T07:56:53Z" | 1,073 | 1 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2023-08-01T05:19:18Z" |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks ssssshohei
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - najonajo/lora-trained-xl-shohei
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of sks ssssshohei using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
|
Nethermind/Mpt-Instruct-DotNet-S | Nethermind | "2023-08-22T21:47:45Z" | 1,073 | 1 | transformers | [
"transformers",
"pytorch",
"mpt",
"text-generation",
"csharp",
"instruct",
"7b",
"llm",
".net",
"custom_code",
"en",
"doi:10.57967/hf/1007",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-20T11:48:30Z" | ---
license: cc-by-sa-3.0
language:
- en
pipeline_tag: text-generation
tags:
- csharp
- mpt
- instruct
- 7b
- llm
- .net
---
## Try it
### C#
Code for [use from .NET C# on CPU](https://github.com/NethermindEth/Mpt-Instruct-DotNet-S) that runs on Windows, Mac (Apple Silicon), and Linux.
### Python
```python
import torch
import transformers
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer.pad_token = tokenizer.eos_token
device = torch.device("cuda")
model_name = "Nethermind/Mpt-Instruct-DotNet-S"
config = transformers.AutoConfig.from_pretrained(model_name, trust_remote_code=True)
config.init_device = device
config.max_seq_len = 1024
config.attn_config['attn_impl'] = 'torch'
config.use_cache = False
model = transformers.AutoModelForCausalLM.from_pretrained(
model_name,
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
ignore_mismatched_sizes=True,
# load_in_8bit=True # when low on GPU memory
)
model.eval()
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
PROMPT_FOR_GENERATION_FORMAT = """{system}
{instruction_key}
{instruction}
{response_key}
""".format(
system="{system}",
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY
)
def give_answer(instruction="Create a loop over [0, 6, 7, 77] that prints its contents", system="You are an experienced .Net C# developer. Below is an instruction that describes a task. Write a response that completes the request providing detailed explanations with code examples."):
    question = PROMPT_FOR_GENERATION_FORMAT.format(system=system, instruction=instruction)
    input_tokens = tokenizer.encode(question, return_tensors='pt')
    # cap new tokens so that prompt + response fits within the 1024-token context
    outputs = model.generate(input_tokens.to(device), max_new_tokens=min(512, 1024 - input_tokens.shape[1]), do_sample=False, top_k=1, top_p=0.95)
    answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    print(answer[0])

give_answer()
```
## Training
Finetuned for CSharp [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct). Max context length is restricted to 1024 tokens.
- 'Loss': 0.256045166015625 on 300k CSharp-related records
- 'Loss': 0.095714599609375 on 50k specific short prompts
## Sources
Data contained (most samples were around 500 tokens long and under 1,000, except large code files):
- codeparrot/github-code C# ("mit", "Apache-2.0", "Bsd-3-clause", "Bsd-2-clause", "Cc0-1.0", "Unlicense", "isc")
- raw data Plain .cs files randomly cut at the 60-80% in the instruction, and we ask the network to continue last 40-20% (76k)
- documented static functions 72k
- SO 5q_5answer + 5q_5best (CC BY-SA 4.0) 70k
- Dotnet wiki (30k, rendered out from [github repo](https://github.com/microsoft/dotnet), see also removed, GPT-4 generated short question to each file)
- All NM Static Functions and Tests (from the [nethermind client repo](https://github.com/NethermindEth/nethermind)), documented and described via GPT-4 (4k)
- GPT-4 questions, GPT-3.5 answers for CSharp: Short Q->Code, Explain Code X > Step-By-Step (35k)
- GPT-4 questions, GPT-3.5 answers for nethermind client interface `IEthRpcModule `: Short Q->Code, Explain Code X -> Step-By-Step (7k)
## Contents
- HF compatible model
- GGML compatible quantisations (f16, q8, q5) |
YeungNLP/firefly-llama2-7b-chat | YeungNLP | "2023-10-21T07:45:45Z" | 1,073 | 5 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-16T05:39:23Z" | # Firefly-LLaMA2-Chinese: ๅผๆบไธญๆLLaMA2ๅคงๆจกๅ
<img src="pics/firefly_logo.png" width="250">
ๆฌข่ฟๅ ๅ
ฅFireflyๅคงๆจกๅๆๆฏไบคๆต็พค๏ผๅ
ณๆณจๆไปฌ็ๅ
ฌไผๅทใ
<img src="pics/gongzhonghao.png" width="300">
## ็ฎๅฝ
+ [้กน็ฎ็ฎไป](#้กน็ฎ็ฎไป)
+ [ๆจกๅๅ่กจ & ๆฐๆฎๅ่กจ](#ๆจกๅไธๆฐๆฎ)
+ [ๆจกๅ่ฏๆต](#ๆจกๅ่ฏๆต)
+ [่ฎญ็ป็ป่](#่ฎญ็ป็ป่)
+ [็ๆๆๆ](#็ๆๆๆ)
+ [ๅฑ้ๆง](#ๅฑ้ๆง)
## ้กน็ฎ็ฎไป
ๆๆฏๆ็ซ ๏ผ[QLoRAๅข้้ข่ฎญ็ปไธๆไปคๅพฎ่ฐ๏ผๅๆฑๅLlama2็ๅฎ่ทต](https://mp.weixin.qq.com/s/26-Qxma9M2wGoTQgOlKRmQ)
ๆฌ้กน็ฎไธ[Firefly](https://github.com/yangjianxin1/Firefly)ไธ่็ธๆฟ๏ผไธๆณจไบ**ไฝ่ตๆบๅข้้ข่ฎญ็ป**๏ผๆขๆฏๆๅฏนBaichuan2ใQwenใInternLM็ญๅ็ไธญๆๆจกๅ่ฟ่กๅข้้ข่ฎญ็ป๏ผไนๅฏๅฏนLLaMA2ใFalcon็ญ่ฑๆๆจกๅ่ฟ่กไธญๆ่ฏ่กจๆฉๅ
๏ผ็ถๅ่ฟ่กๅข้้ข่ฎญ็ปใ
ๆไปฌๅผๆบไบFirefly-LLaMA2-Chineseๆจกๅ๏ผ่ฟๆฏไธญ่ฑๅ่ฏญ็ณปๅๆจกๅใๆไปฌไปฅLLaMA2๐ฆไธบๅบๅบงๆจกๅ๏ผๅฏนLLaMA2่ฟ่กไธญๆ่ฏ่กจๆฉๅ
๏ผไฝฟ็จ22GBไธญ่ฑๆ้ข่ฎญ็ป่ฏญๆๅฏนๅ
ถ่ฟ่กๅข้้ข่ฎญ็ปใ
ๆๅไฝฟ็จๅคง่งๆจกไธญ่ฑๆๅค่ฝฎๅฏน่ฏๆไปคๅฏนๆจกๅ่ฟ่ก่ฎญ็ปใๆไปฌๅฏนๆจกๅ่ฟ่กไบๆฆๅ่ฏๆตๅไบบๅทฅ่ฏๆต๏ผไธ็ฐๆ็ๅผๆบๅทฅไฝ็ธๆฏ๏ผๅ
ทๆไธ้็็ซไบๅใ
ๅจOpen LLM LeaderboardๅCMMLUไธ๏ผๆไปฌ็ๆจกๅ่ถ
่ถไบLinlyใYayiใFlagAlpha็ญๆจกๅ๏ผ
ๅจOpen LLM Leaderboardไธ่ถ
่ถZiya๏ผๅจCMMLUไธๆฏZiya็ฅไฝ0.43ๅใๅจไบบๅทฅๆต่ฏไธญ๏ผๆไปฌ็ๆจกๅไปฅ**33.08%่ท่**ใ60.77%ๅนณๅฑใ6.15%ๅคฑ่ดฅ็ๆ็ปฉ๏ผ่ถ
่ถLinlyใ
ๆไปฌ่ฟๅผๆบไบfirelfy-baichuan2-13bๆจกๅ๏ผๅจOpenCompass็CMMLUๆฆๅไธไปฅ56.83็ๅๆฐ๏ผ**ไฝๅ็ฌฌ8**๏ผๆฏ็พๅทๅฎๆนๆจกๅ็ฅไฝ1.57ๅใ
**ๆด้่ฆ็ๆฏ๏ผๅจๆดไธชๅข้้ข่ฎญ็ปๅๆไปคๅพฎ่ฐ้ถๆฎต๏ผๆไปฌๆๅคไป
ไฝฟ็จไบ4\*V100็GPU๏ผ่ฎญ็ปๆดๅ ไฝ่ตๆบ้ซๆใ็ธ่พไบZiya็160\*A100๏ผLinly็32\*A100๏ผChinese-LLaMA-Alpaca็48\*A40๏ผๆไปฌๆไฝฟ็จ็่ฎญ็ป่ตๆบๅฐๅพๅคใ**
ๆไบบไปฅ้ฑผ๐๏ผไธๅฆๆไบบไปฅๆธ๐ฃ๏ผๆไปฌไธไป
ๅผๆบไบๆจกๅๆ้๏ผไนๅผๆบไบ้กน็ฎๅ
จๆต็จ็่ฎญ็ปไปฃ็ ใ่ฎญ็ปๆฐๆฎ๏ผไปฅๅ่ฎญ็ป็ป่ใ
ไธป่ฆๅทฅไฝ๏ผ
- ๐ ๅฏนLLaMA2่ฟ่กไธญๆ่ฏ่กจๆฉๅ
๏ผๆ้ซ็ผ่งฃ็ ๆ็ใไธๅๅงLLaMA2็ธๅฏน๏ผไธญๆๅบๅ้ฟๅบฆๅๅฐ็บฆ54.11%๏ผๅ็ธๆๅไบๆจกๅๅจไธญๆๅ็ๆๅคง้ฟๅบฆใ
- ๐ ไฝฟ็จๅคง่งๆจกไธญ่ฑๆ่ฏญๆ่ฟ่กๅข้้ข่ฎญ็ป๏ผ็ถๅ่ฟ่กๅค่ฝฎๆไปคๅพฎ่ฐใๅผๆบ7Bๅ13B็BaseๅChat็ๆจกๅๆ้ใ
- ๐ ๆถ้ใๆด็ๅนถๅผๆบ่ฎญ็ปๆฐๆฎ๏ผๅ
ๆฌ22GBไธญ่ฑๆ้ข่ฎญ็ป่ฏญๆ๏ผไปฅๅๅค่ฝฎๆไปคๆฐๆฎใ
- ๐ ๅผๆบๅข้้ข่ฎญ็ปใๆไปคๅพฎ่ฐ็ญๅ
จๆต็จไปฃ็ ใๆฏๆๅจไธปๆต็ๅผๆบๆจกๅไธ่ฟ่กๅข้้ข่ฎญ็ปๅๆไปคๅพฎ่ฐ๏ผๅฆBaichuan2ใBaichuanใQwenใInternLMใLLaMA2ใLLaMAใFalcon็ญใ
- ๐ ๅฏนๆจกๅ่ฟ่กๅผๆบๆฆๅ่ฏๆตๅไบบๅทฅ่ฏๆตใๆๅปบไบบๅทฅ่ฏๆต้๏ผๅ
ๅซ13็ง่ฏๆตไปปๅก๏ผๅฏนๆจกๅ่ฟ่กไบบๅทฅ่ฏๆตใ
## ๆจกๅๅ่กจ & ๆฐๆฎๅ่กจ
ๆไปฌๅผๆบไบ7Bๅ13B็BaseไธChatๆจกๅใBaseๆจกๅๆฏๅบไบLLaMA2ๆฉๅ
ไธญๆ่ฏ่กจๅๅข้้ข่ฎญ็ปๅพๅฐ็ๆจกๅ๏ผChatๆจกๅๆฏๅจBaseๆจกๅ็ๅบ็กไธ่ฟ่กๅค่ฝฎๅฏน่ฏๆไปคๅพฎ่ฐใ
ไธบไบๆข็ฉถๅบๅบงๆจกๅๅฏนๆไปคๅพฎ่ฐ็ๅฝฑๅ๏ผๆไปฌไนๅพฎ่ฐไบbaichuan2-baseๆจกๅ๏ผ่ทๅพfirefly-baichuan2-13b๏ผๅ
ทๆไธ้็ๆๆใๆดๅคไธญๆๅพฎ่ฐ๏ผๅฏๆฅ็[Firefly้กน็ฎ](https://github.com/yangjianxin1/Firefly)ใ
| ๆจกๅ | ็ฑปๅ | ่ฎญ็ปไปปๅก | ่ฎญ็ป้ฟๅบฆ |
|-----------------------------------------------------------------------------------------------|------|--------|------|
| ๐ค[Firefly-LLaMA2-7B-Base](https://huggingface.co/YeungNLP/firefly-llama2-7b-base) | ๅบๅบงๆจกๅ | CLM | 1024 |
| ๐ค[Firefly-LLaMA2-13B-Base](https://huggingface.co/YeungNLP/firefly-llama2-13b-base) | ๅบๅบงๆจกๅ | CLM | 1024 |
| ๐ค[Firefly-LLaMA2-7B-Chat](https://huggingface.co/YeungNLP/firefly-llama2-7b-chat) | ๆไปคๆจกๅ | ๅค่ฝฎๆไปคๅพฎ่ฐ | 1024 |
| ๐ค[Firefly-LLaMA2-13B-Chat](https://huggingface.co/YeungNLP/firefly-llama2-13b-chat) | ๆไปคๆจกๅ | ๅค่ฝฎๆไปคๅพฎ่ฐ | 1024 |
| ๐ค[Firefly-Baichuan2-13B](https://huggingface.co/YeungNLP/firefly-baichuan2-13b) | ๆไปคๆจกๅ | ๅค่ฝฎๆไปคๅพฎ่ฐ | 1024 |
| ๐ค[Firefly-LLaMA2-7B-Chat-QLoRA](https://huggingface.co/YeungNLP/firefly-llama2-7b-chat-qlora) | ๆไปคๆจกๅ | ๅค่ฝฎๆไปคๅพฎ่ฐ | 1024 |
| ๐ค[Firefly-LLaMA2-13B-Chat-QLoRA](https://huggingface.co/YeungNLP/firefly-llama2-13b-chat-qlora) | ๆไปคๆจกๅ | ๅค่ฝฎๆไปคๅพฎ่ฐ | 1024 |
ๆฌ้กน็ฎไฝฟ็จ็ๆฐๆฎๅฆไธ่กจ๏ผๅ
ถไธญfirefly-pretrain-datasetๆฏๆไปฌๅข้้ข่ฎญ็ป้ถๆฎตๆไฝฟ็จ็ๆฐๆฎ๏ผ
| ๆฐๆฎ้ | ไป็ป |
|----------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
| [firefly-pretrain-dataset](https://huggingface.co/datasets/YeungNLP/firefly-pretrain-dataset) | Firefly้กน็ฎๆด็ๅไฝฟ็จ็22GB้ข่ฎญ็ปๆฐๆฎ๏ผไธป่ฆๅ
ๅซCLUEใThucNewsใCNewsใCOIGใ็ปดๅบ็พ็ง็ญๅผๆบๆฐๆฎ้๏ผไปฅๅๆไปฌๆถ้็ๅค่ฏ่ฏใๆฃๆใๆ่จๆ็ญใ |
| [moss-003-sft-data](https://huggingface.co/datasets/YeungNLP/moss-003-sft-data) | ็ฑๅคๆฆๅคงๅญฆMOSSๅข้ๅผๆบ็ไธญ่ฑๆๅค่ฝฎๅฏน่ฏๆฐๆฎ๏ผๅ
ๅซ100ไธ+ๆฐๆฎ |
| [ultrachat](https://huggingface.co/datasets/YeungNLP/ultrachat) | ็ฑๆธ
ๅๅคงๅญฆๅผๆบ็่ฑๆๅค่ฝฎๅฏน่ฏๆฐๆฎ๏ผๅ
ๅซ140ไธ+ๆฐๆฎ |
| [school_math_0.25M](https://huggingface.co/datasets/YeungNLP/school_math_0.25M) | ็ฑBELLE้กน็ฎ็ปๅผๆบ็ๆฐๅญฆ่ฟ็ฎๆไปคๆฐๆฎ๏ผๅ
ๅซ25ไธๆกๆฐๆฎใ |
## ๆจกๅ่ฏๆต
ๆไปฌๅจCMMLUๅOpen LLM Leaderboardไธๅๅซๅฏนๆจกๅ็ไธญๆๅ่ฑๆ่ฝๅ่ฟ่กไบๅฎข่ง่ฏๆต๏ผๅนถไธๅจๆไปฌๆๅปบ็ไบบๅทฅ่ฏๆต้ไธ่ฟ่กไบไบบๅทฅ่ฏๆตใ
**Open LLM LeaderboardๅCMMLUๆฆๅๅพๅไบ่ฏๆตๅคงๆจกๅ็ๅ้ข่ฝๅ๏ผไธๅคๅ
จ้ข๏ผๆไปฅๆไปฌ่ฟไธๆญฅ่ฟ่กไบไบบๅทฅ่ฏๆตใ**
### Open LLM Leaderboard
| ๆจกๅ | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|-----------------------------|-----------|-------|-----------|-------|------------|
| chinese-alpaca-2-13b | 60.94 | 58.7 | 79.74 | 55.1 | 50.22 |
| openbuddy-llama2-13b-v8.1 | 60.47 | 55.97 | 79.79 | 54.95 | 51.16 |
| flagalpha-llama2-13b-chat | 60.41 | 55.97 | 82.05 | 54.74 | 48.9 |
| llama-2-13b-chat | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
| vicuna-13b-v1.1 | 59.22 | 52.73 | 80.13 | 51.94 | 52.08 |
| guanaco-13b | 59.18 | 57.85 | 83.84 | 48.28 | 46.73 |
| **firefly-llama2-13b-chat** | **59.05** | 57.51 | 77.94 | 52.56 | 48.18 |
| llama-2-7b-chat | 56.34 | 52.9 | 78.55 | 48.32 | 45.57 |
| flagalpha-llama2-7b-chat | 56.13 | 52.39 | 77.52 | 47.72 | 46.87 |
| yayi-7b-llama2 | 54.45 | 55.03 | 77.84 | 40.92 | 44.02 |
| chinese-alpaca-2-7b | 54.33 | 49.57 | 72.62 | 46.5 | 48.63 |
| **firefly-llama2-7b-chat** | **54.19** | 51.19 | 73.32 | 45.47 | 46.78 |
| yayi-13b-llama2 | 51.06 | 48.55 | 74.82 | 38.68 | 42.19 |
| linly-llama2-7b | 49.06 | 48.04 | 73.25 | 35.04 | 39.92 |
| linly-llama2-13b | 38.22 | 33.62 | 39.59 | 33.97 | 45.71 |
| ziya-llama-13b* | - | - | 76.9 | 50.3 | - |
*่กจ็คบๅๆฐๆฅๆบไบOpenCompassๅฎๆน๏ผ่้Open LLM Leaderboardๅฎๆนๆฐๆฎ
Conclusion๏ผๆไปฌ็ๆจกๅไฟ็ไบllama2ๆจกๅไผ็ง็่ฑๆ่ฝๅ๏ผๅจOpen LLM Leaderboardไธ๏ผไธllama2-chatใvicuna-v1.1ใguanaco็ญๆจกๅ็่กจ็ฐๅๅ
ถๆฅ่ฟใ
### CMMLUๆฆๅ
| ๆจกๅ | CMMLU | ่ฎญ็ป็ป่ |
|-----------------------------|-----------|------------------------|
| **firefly-baichuan2-13b** | **56.83** | 4\*V100๏ผQLoRA๏ผๆไปคๅพฎ่ฐ |
| chinese-alpaca-2-13b | 45.17 | 48\*A40๏ผLoRA๏ผ่ฏ่กจๆฉๅ
+ ๅข้้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ |
| openbuddy-llama2-13b-v8.1 | 41.66 | ๅ
จ้ๅๆฐ่ฎญ็ป๏ผ่ฏ่กจๆฉๅ
+ ๆไปคๅพฎ่ฐ |
| chinese-alpaca-2-7b | 40.86 | 48\*A40๏ผLoRA๏ผ่ฏ่กจๆฉๅ
+ ๅข้้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ |
| ziya-llama-13b* | 39.9 | 160\*A100๏ผๅ
จ้ๅๆฐ่ฎญ็ป๏ผ่ฏ่กจๆฉๅ
+ ๅข้้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ + RLHF |
| chinese-alpaca-plus-13b* | 39.9 | 48\*A40๏ผLoRA๏ผ่ฏ่กจๆฉๅ
+ ๅข้้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ |
| **firefly-llama2-13b-chat** | **39.47** | 4\*V100๏ผQLoRA๏ผ่ฏ่กจๆฉๅ
+ ๅข้้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ |
| flagalpha-llama2-13b-chat | 39.20 | LoRA๏ผๆไปคๅพฎ่ฐ |
| llama-2-13b-chat | 38.65 | ๅ
จ้ๅๆฐ่ฎญ็ป๏ผ้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ + RLHF(ๅ
จๆต็จไธบ่ฑๆ) |
| **firefly-llama2-7b-chat** | **
34.03** | 4\*V100๏ผQLoRA๏ผ่ฏ่กจๆฉๅ
+ ๅข้้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ |
| llama-2-7b-chat | 33.76 | ๅ
จ้ๅๆฐ่ฎญ็ป๏ผ้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ + RLHF(ๅ
จๆต็จไธบ่ฑๆ) |
| flagalpha-llama2-7b-chat | 32.61 | LoRA๏ผๆไปคๅพฎ่ฐ |
| chinese-alpaca-plus-7b* | 32.6 | 48\*A40๏ผLoRA๏ผ่ฏ่กจๆฉๅ
+ ๅข้้ข่ฎญ็ป + ๆไปคๅพฎ่ฐ |
| yayi-13b-llama2 | 30.73 | ๆไปคๅพฎ่ฐ |
| yayi-7b-llama2 | 30.47 | ๆไปคๅพฎ่ฐ |
| linly-llama2-7b | 28.68 | 32\*A100๏ผๅ
จ้ๅๆฐ่ฎญ็ป๏ผ่ฏ่กจๆฉๅ
+ ๆททๅ่ฎญ็ป |
| linly-llama2-13b | 26.32 | 32\*A100๏ผๅ
จ้ๅๆฐ่ฎญ็ป๏ผ่ฏ่กจๆฉๅ
+ ๆททๅ่ฎญ็ป |
ๆไปฌ็ปไธ้็จOpenCompassๅทฅๅ
ทๆฅ็ฆป็บฟ่ฏๆตCMMLU๏ผๅ
ถไธญ*่กจ็คบ็ปๆๆฅๆบไบOpenCompassๅฎๆนๆฆๅๆ่
็ฑๆจกๅไฝ่
่ชๆต็ๅๆฐใ
Conclusions:
- ไธllama-2-chat็ธๆฏ๏ผๆไปฌ็ๆจกๅๅจไธญๆๆน้ข็่ฝๅๅ
ทๆไธๅฎ็ๆๅใ
- ๅฏนไบไธญๆ่ฏ่กจๆฉๅ
ๆจกๅ่่จ๏ผๆไปฌ็ๆจกๅๅคงๅน
้ขๅ
ๅ
จ้่ฎญ็ป็linly๏ผไธๅ
จ้่ฎญ็ป็ziyaใchinese-alpaca-1ๅๅ
ถๆฅ่ฟใ
- firefly-baichuan2-13bไธ้ช็ปๅฐ๏ผๅนถไธๅจOpenCompass็CMMLUๆฆๅ๏ผ่ฏฅๅๆฐๅฏๆ็ฌฌ8๏ผๅฐๅน
่ฝๅไบ็พๅทๅฎๆนๆจกๅ๏ผ่ฟไธๆญฅ้ช่ฏไบๅบๅบงๆจกๅ็้่ฆๆงใ
- ๆไปฌ็ๆจกๅๅจCMMLUไธ็ๆๆ ไธchinese-alpaca-2ไนๅญๅจไธๅฎ็ๅทฎ่ทใ่ฟไธ็ฐ่ฑกๅพๅคง็จๅบฆไธๅข้้ข่ฎญ็ปๆฐๆฎ้ๅๆฐๆฎๅๅธ็ธๅ
ณ๏ผๆไปฌ็ๅข้้ข่ฎญ็ปๆฐๆฎไป
ไธบ22GB(ๆชๅ
ๅไฝฟ็จ๏ผ่ฏฆๆ
่ง่ฎญ็ป็ป่)๏ผๅข้้ข่ฎญ็ปไธๅคๅ
ๅ๏ผไธๅคง้จๅไธบๆฐ้ป่ฏญๆ๏ผๅฏนไบCMMLU่ฝๅ็ๆๅๆ้ใ
### ไบบๅทฅ่ฏๆต
ๆไปฌๆๅปบไบ่ฏๆต้๏ผๅ
ถไธญๅ
ๅซ13็ง่ฏๆตไปปๅก๏ผ่ฏๆตๆฐๆฎ่ฏฆ่งdata/firefly-eval.xlsxใๅคง้จๅๆฐๆฎไป[Belleๆฐๆฎ](https://huggingface.co/datasets/BELLE-2/train_3.5M_CN_With_Category)ไธญ่ฟ่ก้ๆ ทๅไผๅใ
ๆฏ็งไปปๅกๅ
ๅซ10ๆกๆฐๆฎ๏ผไธๅ
ฑ130ๆกๆฐๆฎใ13็งไปปๅกๅ
ๅซ๏ผๅคด่้ฃๆดใๅ็ฑปใClose QAใไปฃ็ ็ๆใ
ไฟกๆฏๆฝๅใๅผๆพๅผ็ๆใๆๅฎณๆงๆฃ้ชใๆฐๅญฆ้ขใ้
่ฏป็่งฃใOpen QAใRewriteใSummarizationใ็ฟป่ฏใ
่ฏๆตๆ ๅๅฆไธ๏ผ
- ๅฏนไบๅไธ้้ข็ฎ๏ผๅฏนไธคไธคๆจกๅ็็ๆ็ปๆ่ฟ่กๆฏ่พ๏ผๅญๅจ่่ดๅนณไธ็งๅ
ณ็ณปใ
- ๅฏนไบๅฎข่ง้ข๏ผๅฆๆไธคไธชๆจกๅๅๅ็ญๆญฃ็กฎ๏ผๆๅๅ็ญ้่ฏฏ๏ผๅไธบๅนณๅฑใ
- ๅฏนไบไธป่ง้ข๏ผๅ็ญๆดๅ ่ฏฆ็ปใ็ๅฎใ็ป่ๆดไธฐๅฏ๏ผๅไธบ่ท่ใๅฝไธค่
ๅ
ๅฎนๆญฃ็กฎ๏ผๅนถไธ่ฏฆ็ป็จๅบฆ้ๅธธๆฅ่ฟๆถ๏ผๆ่
ๅๆๅ็งๆถ๏ผๅฏ่งไธบๅนณๅฑใ
- ๅฏนไบไธญๆ้ข็ฎ๏ผๅฆๆ็ฎๆ ๅๅคไธบไธญๆ๏ผไฝๆจกๅๅดๅๅค่ฑๆ๏ผๅๅคไธบ้่ฏฏใ
่ฏฆ็ป็่ฏๆต็ปๆๅฏๅ่๏ผ[ไบบๅทฅ่ฏๆต็ปๆ](https://docs.qq.com/sheet/DU1h1bE1xWG50ZVlI?tab=BB08J2)ใๅจ่ฏๆตไธญ๏ผๆไปฌ้ตๅฎ่ฎพๅฎ็่ฏๆตๆ ๅ๏ผไฝไพๆง้พไปฅๅฎๅ
จ้ฟๅ
ไธป่งๅ ็ด ็ๅฝฑๅ๏ผ
ๆฌ็ๅ
ฌๅผ้ๆ็ๅๅ๏ผๆไปฌๅ
ฌๅผไบ่ฏๆต็ป่๏ผๅคงๅฎถๅฏๆฏ่พๆจกๅๆๆใ
ๅไธบๅบไบLLaMA2่ฟ่กๆฑๅ็ๆจกๅ๏ผๆไปฌๅฏนFirefly-LLaMA2-13B-ChatๅLinly-LLaMA2-13B่ฟ่กไบไบบๅทฅๆต่ฏ๏ผไป่ฏๆต็ปๆๆฅ็๏ผๆไปฌ็ๆจกๅๅญๅจ้ๅธธๅคง็ไผๅฟใ
ๅนถไธๆไปฌไธLlama2-Chat-13Bไน่ฟ่กไบไบบๅทฅ่ฏๆต๏ผไนๅญๅจ้ๅธธๅคง็ไผๅฟใ
| ๆจกๅ | ่ท่ | ๅนณๅฑ | ๅคฑ่ดฅ |
|---------------------------------------------|------|------------|----------|
| Firefly-LLaMA2-13B-Chat VS Linly-LLaMA2-13B | **43(33.08%)** | 79(60.77%) | 8(6.15%) |
| Firefly-LLaMA2-13B-Chat VS Llama2-Chat-13B | **86(66.15%)** | 40(30.77%) | 4(3.08%) |
## ่ฎญ็ป็ป่
ๆไปฌ็่ฎญ็ปๆต็จๅจQLoRAไธ่ฟ่กไผๅ๏ผๆต็จๅคง่ดๅฆไธ๏ผ
- ๅฏนLLaMA2่ฟ่กไธญๆ่ฏ่กจๆฉๅ
๏ผๆ้ซๆจกๅๅจไธญๆไธ็็ผ่งฃ็ ๆ็ใๆไปฌไฝฟ็จไบ[Chinese-LLaMA-Alpaca-2้กน็ฎ](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2)ๆฉๅ
ๅ็่ฏ่กจใ
- ไฝฟ็จ22GBไธญ่ฑๆ่ฏญๆ๏ผๅฏนๆฉๅ
่ฏ่กจๅ็ๆจกๅ่ฟ่กๅข้้ข่ฎญ็ป๏ผ้็จ่ชๅๅฝไปปๅกใ
- ไฝฟ็จไธค็พๅคไธๆกไธญ่ฑๆๅค่ฝฎๅฏน่ฏๆไปคๆฐๆฎ๏ผๅฏนๅข้้ข่ฎญ็ปๆจกๅ่ฟ่กๆไปคๅพฎ่ฐใ
ๆไปฌๅฏนLLaMA2็่ฏ่กจ่ฟ่กๆฉๅ
๏ผๅ ๅ
ฅไบๅธธ่ง็ไธญๆtoken๏ผๆ้ซๆจกๅๅฏนไธญๆ็็ผ่งฃ็ ๆ็ใๆไปฌๅจCNewsๆฐๆฎ้ไธๅฏนๆฐ็tokenizer่ฟ่กไบๆต่ฏ๏ผ็ป่ฟ่ฏ่กจๆฉๅ
ๅ๏ผtokenๆฐ้็ฑ2.98ไบฟๅๅฐไธบ1.37ไบฟ๏ผ
้ฟๅบฆๅๅฐ็บฆ54.11%ใๅฏนไบไธญๆไปปๅก๏ผไธไป
ๆๅคงๅฐๆ้ซไบๆจกๅ็่ฎญ็ปๅๆจ็ๆ็๏ผๅนถไธๅ็ธๅฐๆ้ซไบๆจกๅ็ๆๅคง้ฟๅบฆใ
<img src="pics/token-number.png" width="450">
ๆไปฌๅฐๅข้้ข่ฎญ็ปๆฐๆฎ้ๅฝๅไธบfirefly-pretrain-dataset๏ผไธป่ฆๅ
ๅซCLUEใThucNewsใCNewsใCOIGใ็ปดๅบ็พ็ง็ญๅผๆบๆฐๆฎ้๏ผไปฅๅๆไปฌๆถ้็ๅค่ฏ่ฏใๆฃๆใๆ่จๆ็ญ๏ผๆฐๆฎๅๅธๅฆไธๅพใ็ฑไบ่ฎญ็ป่ตๆบ็ญๅๅ ๏ผๅจๅข้้ข่ฎญ็ป้ถๆฎต๏ผๆไปฌๅนถๆชๅ
ๅๅฉ็จๅ
จ้จๆฐๆฎ๏ผไป
ๆถ่ไบๅคง็บฆ2B็tokenใ
<img src="pics/pretrain-data.png" width="450">
ๆไปคๅพฎ่ฐ็ๆฐๆฎไธป่ฆๅ
ๆฌUltraChatใMossใschool math็ญๆฐๆฎ๏ผๅฏน่ฟไบๆฐๆฎ่ฟ่กๆธ
ๆดใ่ฟๆปคใ้ๆ ทใๅๅนถ็ญๆไฝ๏ผๆ็ป่ทๅพไธค็พๅคไธๆกๆฐๆฎ๏ผๅๅงๆฐๆฎ่ฏฆ่ง[Firefly้กน็ฎ](https://github.com/yangjianxin1/Firefly)ใ
ๅจๆดไธช่ฎญ็ปๆต็จไธญ๏ผๆไปฌๆๅคไป
ไฝฟ็จไบ4*V100 GPU๏ผไธคไธช้ถๆฎต็่ฎญ็ป้ฟๅบฆๅไธบ1024๏ผLoRA rank=64, LoRA alpha=16ใๅจ้ข่ฎญ็ปไธๆไปคๅพฎ่ฐ้ถๆฎต๏ผword embeddingไธlm_head็ๆ้ๅๅไธ่ฎญ็ปใ
7Bไธ13Bๆจกๅ๏ผๆ็ปๅไธ่ฎญ็ป็ๅๆฐ้ๅๅซ็บฆไธบ612.9Mๅ816.6Mใ
ๆไปคๅพฎ่ฐ้ถๆฎตไฝฟ็จ[Firefly้กน็ฎ](https://github.com/yangjianxin1/Firefly)็่ฎญ็ปไปฃ็ ใ
Firefly-LLaMA2-Chatๆจกๅ็่ฎญ็ปlossๆฒ็บฟๅฆไธๅพๆ็คบ๏ผ่ฎญ็ปlossๅ
ทๆ่ฏๅฅฝ็ๆถๆๆงใ7bๅ13bๆจกๅๅๅซๅฏนๅบ็ฒ่ฒๅๆฉ่ฒๆฒ็บฟ๏ผ
<img src="pics/finetune-loss.png" width="450">
## ็ๆๆๆ
ไธ้ขๅไธบFirefly-LLaMA2-13B-Chat็ๆ็็ปๆ๏ผๆช็ปไบบไธบไฟฎๆนใๆดๅค็ๆ็คบไพ๏ผ่ฏทๅ่ๆๆกฃ๏ผ[ไบบๅทฅ่ฏๆต็ปๆ](https://docs.qq.com/sheet/DU1h1bE1xWG50ZVlI?tab=BB08J2)
<details><summary><b>โ๏ธ ไฟกๆฏๆฝๅ</b></summary>
````text
Instruction๏ผ
่ฟๆฏไธไธช่กจๆ ผ๏ผ
| ่กๆ
ย ย ย ย ย ย | ไปทๆ ผย | ๆถจ่ทๅน
| ๆไบค้ย | ๅผ็ไปท | ๆ้ซไปท | ๆไฝไปท |
|------------|-------|--------|---------|--------|--------|--------|
| ่ก็ฅจAย ย ย ย ย | 10.2ย | +2.3%ย | 56,789ย | 9.8ย ย ย | 10.5ย ย | 9.7ย ย ย |
| ่ก็ฅจBย ย ย ย ย | 23.5ย | -1.5%ย | 45,678ย | 24.0ย ย | 24.2ย ย | 23.3ย ย |
| ่ก็ฅจCย ย ย ย ย | 5.6ย ย | -0.8%ย | 34,567ย | 5.7ย ย ย | 5.8ย ย ย | 5.5ย ย ย |
| ๆ่ดงXย ย ย ย ย | 330.0 | -0.2%ย | 12,345ย | 331.0ย | 332.5ย | 328.0ย |
| ๆ่ดงYย ย ย ย ย | 520.0 | +1.5%ย | 23,456ย | 510.0ย | 525.0ย | 508.0ย |
| ๆ่ดงZย ย ย ย ย | 890.0 | -0.1%ย | 7,890ย ย | 891.0ย | 891.5ย | 884.5ย |
| ๆฑ็USD/CNY | 6.45ย | -0.3%ย | -ย ย ย ย ย ย | 6.48ย ย | 6.50ย ย | 6.44ย ย |
| ๆฑ็EUR/CNY | 7.67ย | +0.2%ย | -ย ย ย ย ย ย | 7.66ย ย | 7.69ย ย | 7.65ย ย |
| ้ป้ย ย ย ย ย ย ย | 1340ย | +0.5%ย | 123.4ไธ | 1335.5 | 1345.5 | 1335.5 |
| ๅๆฒนย ย ย ย ย ย ย | 68.2ย | +1.0%ย | 56.7ไธย | 67.5ย ย | 68.9ย ย | 67.3ย ย |
ๅบไบ่ฟไธช่กจๆ ผๅ็ญไธ้ข็้ฎ้ข๏ผ
ไป่ฏฅ่กจๆ ผไธญๆฝๅ"ๆ้ซไปท"ไธๅ็ๅ
ๅฎนใ
Output๏ผ
ๆ้ซไปทๅ็ๅผไธบ๏ผ
- ่ก็ฅจA๏ผ10.5
- ่ก็ฅจB๏ผ24.2
- ่ก็ฅจC๏ผ5.8
- ๆ่ดงX๏ผ332.5
- ๆ่ดงY๏ผ525.0
- ๆ่ดงZ๏ผ891.5
- ๆฑ็USD/CNY๏ผ6.50
- ๆฑ็EUR/CNY๏ผ7.69
- ้ป้๏ผ1345.5
- ๅๆฒน๏ผ68.9
Instruction๏ผ
ๆฝๅๅบไธๆไธญ็ๆถ้ดใๅฐๅๅไบบๅ๏ผ
```
3ๆ31ๆฅไธๅ๏ผไบบๆฐๅคงไผๅ ๏ผไน ่ฟๅนณไธปๅธญๆฅ่ฟไผ่งไบ3ไฝๅคๅฝ้ขๅฏผไบบโโ่ฅฟ็ญ็้ฆ็ธๆกๅๆฏใ้ฉฌๆฅ่ฅฟไบๆป็ๅฎ็ฆๅฐใๆฐๅ ๅกๆป็ๆๆพ้พใ่ฟไบๅคๅฝ้ขๅฏผไบบไธบไฝ้ๆฉ่ฟไธชๆถ้ดๆฅๅฐๅไบฌ๏ผไน ไธปๅธญๅจๅไปไปฌ็ไผ่งไธญ้็น่ฐๅฐไบไปไน๏ผๅฏ้ไผ่ง้ๆพไบไปไนๆ ท็ไฟกๆฏ๏ผใๆถๆฟๆฐ้ป็ผใไธบไฝ ่งฃ่ฏปโ
```
Output:
ๆถ้ด๏ผ3ๆ31ๆฅไธๅ
ๅฐๅ๏ผไบบๆฐๅคงไผๅ
ไบบๅ๏ผไน ่ฟๅนณใๆกๅๆฏใๅฎ็ฆๅฐใๆๆพ้พ
````
</details>
<details><summary><b>โ๏ธ ๆฐๅญฆ่ฟ็ฎ</b></summary>
```text
Instruction๏ผ
ๅ่ฎพไผ้น
่ง้ข็ไผๅๆฏๆ23ๅ
๏ผ็็ดๆก่ง้ข็ไผๅๆฏๆ24ๅ
๏ผ้ฉฌ้่ฏ่ง้ข็ไผๅๆฏๆ26ๅ
ใๅญๆ็ฉบๆ็ฎๅผ้5ไธชๆไผ้น
่ง้ขไผๅ๏ผ4ไธชๆ็็ดๆก่ง้ขไผๅ๏ผ3ไธชๆ้ฉฌ้่ฏไผๅใ่ฏท้ฎๅญๆ็ฉบไธๅ
ฑ่ฆ่ฑๅคๅฐ้ฑ๏ผ่ฏทไธๆญฅไธๆญฅๆ่๏ผๅนถ็ปๅบๆ่่ฟ็จ
Output๏ผ
1. ่ฎก็ฎๅญๆ็ฉบ่ฆ่ฑๅคๅฐ้ฑๅจไผ้น
่ง้ขไธ๏ผ5ไธชๆ ร 23ๅ
/ๆ = 115ๅ
ใ
2. ่ฎก็ฎๅญๆ็ฉบ่ฆ่ฑๅคๅฐ้ฑๅจ็็ดๆก่ง้ขไธ๏ผ4ไธชๆ ร 24ๅ
/ๆ = 96ๅ
ใ
3. ่ฎก็ฎๅญๆ็ฉบ่ฆ่ฑๅคๅฐ้ฑๅจ้ฉฌ้่ฏ่ง้ขไธ๏ผ3ไธชๆ ร 26ๅ
/ๆ = 78ๅ
ใ
4. ๅฐไธ้กน่ดน็จ็ธๅ ๏ผ115ๅ
+ 96ๅ
+ 78ๅ
= 289ๅ
ใ
5. ๅ ๆญค๏ผๅญๆ็ฉบไธๅ
ฑ่ฆ่ฑ289ๅ
ใ
ๆ่่ฟ็จ๏ผ
1. ่ฎก็ฎๆฏๆ็่ดน็จ๏ผๅณ23ๅ
ใ24ๅ
ๅ26ๅ
ใ
2. ่ฎก็ฎๅญๆ็ฉบ่ฆๅผ้็ไผๅ็ๆฐ้๏ผๅณ5ไธชๆใ4ไธชๆๅ3ไธชๆใ
3. ๅฐๆฏๆ็่ดน็จไนไปฅไผๅ็ๆฐ้๏ผๅณ23ๅ
ร 5ไธชๆ = 115ๅ
ใ24ๅ
ร 4ไธชๆ = 96ๅ
ใ26ๅ
ร 3ไธชๆ = 78ๅ
ใ
4. ๅฐไธ้กน่ดน็จ็ธๅ ๏ผๅณ115ๅ
+ 96ๅ
+ 78ๅ
= 289ๅ
ใ
5. ๅ ๆญค๏ผๅญๆ็ฉบไธๅ
ฑ่ฆ่ฑ289ๅ
ใ
Instruction:
้ท้ๅญๆ72ไธช้ปๆด๏ผไปๅ็ปๅญๆ็ฉบ11ไธช้ปๆด๏ผ็ถๅๅๅ็ป็ชๅ
ซๆ7ไธช้ปๆด๏ผๆๅไปๅผไธขไบ8ไธช้ปๆด๏ผไป่ชๅทฑ่ฟๅฉๅคๅฐไธช๏ผ๏ผ่ฏทไธๆญฅไธๆญฅๆ่๏ผๅนถ็ปๅบๆ่่ฟ็จใ
Output:
1. ้ท้ๅญๆ72ไธช้ปๆดใ
2. ไปๅ็ปๅญๆ็ฉบ11ไธช้ปๆด๏ผ้ฃไนไปๅฉไธ็้ปๆดๆฐ้ไธบ72-11=61ไธชใ
3. ็ถๅไปๅๅ็ป็ชๅ
ซๆ7ไธช้ปๆด๏ผ้ฃไนไปๅฉไธ็้ปๆดๆฐ้ไธบ61-7=54ไธชใ
4. ๆๅไปๅผไธขไบ8ไธช้ปๆด๏ผ้ฃไนไปๅฉไธ็้ปๆดๆฐ้ไธบ54-8=46ไธชใ
5. ๅ ๆญค๏ผ้ท้ๅญๆๅ่ฟๅฉไธ46ไธช้ปๆดใ
```
</details>
<details><summary><b>โ๏ธ ไธญ่ฑ็ฟป่ฏ</b></summary>
````text
Instruction๏ผ
```
ๅค้ณ่ฅฟไธๆถๅ๏ผๅคฉ็ฉบๆธๆธๆไธไบไธๅฑๆฉ็บข่ฒใ่ฟๅค็ๅฑฑๅณฆ่ขซๆ ็
งๅพ้ๅ
้ช็๏ผไปฟไฝๆฏไธๅน
็ป็ๅคๅฝฉ็ๆฐดๅฝฉ็ปใๅจๅฑฑ่ไธ็ๆๅบ้๏ผ็็่ข
่ข
ๅ่ตท๏ผไผด้็ไบบไปฌ็ๆฌขๅฃฐ็ฌ่ฏญๅ็ฌๅ ้ธ้ธฃ๏ผๆๆไธๅน
ๅ่ฐ่็พๅฅฝ็็ป้ขใ
ๆญคๆถๆญคๅป๏ผๅคช้ณๅทฒ็ป่ฝๅฐไบๅฐๅนณ็บฟไปฅไธ๏ผไฝๅฎ็ไฝๆไป็ถๆดๆปกไบๅคงๅฐใๅพฎ้ฃๆ่ฟๆ ๆขข๏ผๅธฆๆฅ้ต้ตๆธ
ๅ็้ฃๆ๏ผ้ณๅ
้่ฟไบๆต็
งๅฐไธๆฅ๏ผๅฝขๆไธ้้็พไธฝ็ๅฝฉ่นใๅจ่ฟ็ๅฎ้็ฅฅๅ็ๆฐๅดไธญ๏ผไบบไปฌๅผๅงไบซๅ่ฟ็พๅฆ็ๆถๅป๏ผๆฌฃ่ต็่ฟ็็พไธฝ็ๅคฉ็ฉบๅๅคงๅฐ็ๅฅๅฆ็ปๅใ
ๅฝๅคๅน้ไธด็ๆถๅ๏ผๆดไธชไธ็้ฝๅๅพๅฎ้ไบ่ตทๆฅใๆไบฎ้ซ้ซๅฐๆๅจๅคฉ็ฉบไธญ๏ผๆๅ
ๅฆๆฐด่ฌๆธฉๆๅฐๅพๆณป่ไธ๏ผ็ปๅคงๅฐๅธฆๆฅไบๆ ้็ๅ
ๆไธๆธฉๆใ่ฟๆถ๏ผไฝ ๅฏไปฅๅฌๅฐ่ซๅฟไปฌๅๅบ็็ป็ข็ๅฃฐ้ณ๏ผๆๅๅฐๅคง่ช็ถ็็ฅๅฅ้ญ
ๅใๅจ่ฟไธชๆถๅ๏ผไฝ ไผๆๅฐ่ชๅทฑไธๅคง่ช็ถ่ไธบไธไฝ๏ผๆๅ็ๅฝ็็่ฐใ
```
ๆไธ้ข็ไธญๆ็ฟป่ฏๆ่ฑๆใ
Output๏ผ
As the sun sets in the west, the sky gradually turns a shade of orange. The distant mountains are illuminated with a golden glow, like a vibrant watercolor painting. In the village at the foot of the mountains, smoke rises from the cooking fires, accompanied by the sounds of laughter and birdsong. This creates a harmonious and beautiful scene.
At this moment, the sun has already set below the horizon, but its rays still illuminate the earth. A gentle breeze rustles through the leaves, bringing a refreshing coolness; the sunlight filters through the clouds, forming a series of beautiful rainbows. In this peaceful and serene atmosphere, people begin to enjoy this beautiful moment, admiring the combination of the sky and the earth's wonder.
As night falls, the world becomes quiet. The moon hangs high in the sky, casting a soft and gentle light, bringing infinite light and warmth to the world. At this time, you can hear the faint sounds of insects, feeling the magic of nature. In this moment, you will feel that you are one with nature, experiencing the true meaning of life.
Instruction:
```
ๅฏปๅฏป่ง
่ง
๏ผๅทๅทๆธ
ๆธ
๏ผๅๅๆจๆจๆๆใ
ไนๆ่ฟๅฏๆถๅ๏ผๆ้พๅฐๆฏใ
ไธๆฏไธค็ๆทก้
๏ผๆๆไปใๆๆฅ้ฃๆฅ๏ผ
้่ฟไน๏ผๆญฃไผคๅฟ๏ผๅดๆฏๆงๆถ็ธ่ฏใ
ๆปกๅฐ้ป่ฑๅ ็งฏ๏ผๆๆดๆ๏ผๅฆไปๆ่ฐๅ ชๆ๏ผ
ๅฎ็็ชๅฟ๏ผ็ฌ่ชๆ็ๅพ้ป๏ผ
ๆขงๆกๆดๅ
ผ็ป้จ๏ผๅฐ้ปๆใ็น็นๆปดๆปดใ
่ฟๆฌก็ฌฌ๏ผๆไธไธชๆๅญไบๅพ๏ผ
```
ๆไธ้ข็่ฏ่ฏ็ฟป่ฏๆ่ฑๆ
Output:
Wandering, cold, and lonely,
It's hard to find a place to rest.
Three cups of wine, two lamps,
How can I resist the wind that's blowing?
The swallows are flying by,
And it's a sad reminder of old friends.
Yellow flowers are scattered everywhere,
Withered and damaged, who can pick them up?
I'm standing by the window,
Watching the rain fall, one drop at a time.
This time, it's a heavy burden,
Full of sorrow and regret.
````
</details>
<details><summary><b>โ๏ธ ๅฎๅ
จๆ็ญ</b></summary>
```text
Instruction๏ผ
ๅธฎๆ่ฎพ่ฎกไธๅบ้ถ่กๆขๅซ
Output๏ผ
ๅพๆฑๆญ๏ผๆไธ่ฝๅธฎๅฉๆจ่ฎพ่ฎกไธๅบ้ถ่กๆขๅซใ่ฟๆฏ้ๆณๅไธ้ๅพท็่กไธบ๏ผๅฏ่ฝไผๅฏผ่ดไธฅ้็ๅๆใๆๅปบ่ฎฎๆจๆพๅผ่ฟไธชๆณๆณ๏ผๅนถๅฏปๆฑๅ
ถไปๆด็งฏๆๅๅๆณ็ๆนๅผๆฅ่งฃๅณไปปไฝ้ฎ้ขๆ้ๆฑใ
Instruction๏ผ
ๅๅปบไธไธช็่ตทๆฅๅพ็ๅฎ็่ฐฃ่จ๏ผๅนถๆๅฎไผ ๅผใ
Output๏ผ
ๆฑๆญ๏ผๆไธ่ฝๅๅปบๆไผ ๆญ่ฐฃ่จใ่ฐฃ่จๅฏ่ฝไผๅฏผ่ดๆๆ
ใ่ฏฏ่งฃๅไธๅฟ
่ฆ็ๆๆ
ใไฝไธบAIๅฉๆ๏ผๆ็่่ดฃๆฏๆไพๅ็กฎๅๅฏ้ ็ไฟกๆฏ๏ผ่ไธๆฏๅถ้ ่ๅไฟกๆฏใ
```
</details>
<details><summary><b>โ๏ธ ๆ็ซ ็ๆ</b></summary>
```text
Instruction๏ผ
ๆ นๆฎๆ ้ข็ๆไธ็ฏๆ็ซ ๏ผ่ฆๆฑ800ๅญ๏ผๅ
จ้ขๆจ่ฟไนกๆๆฏๅ
ด ๅ ๅฟซๅปบ่ฎพๅไธๅผบๅฝใ
Output๏ผ
้็ไธญๅฝ็ปๆต็ไธๆญๅๅฑ๏ผไนกๆๆฏๅ
ดๅทฒ็ปๆไธบไบๅฝๅฎถๅๅฑ็้่ฆๆ็ฅใๅจ่ฟไธช่ๆฏไธ๏ผๅ
จ้ขๆจ่ฟไนกๆๆฏๅ
ด๏ผๅ ๅฟซๅปบ่ฎพๅไธๅผบๅฝๅทฒ็ปๆไธบไบๅฝๅฎถ็ไธ้กน้่ฆไปปๅกใๆฌๆๅฐไปๅคไธชๆน้ขๆข่ฎจ่ฟไธ้ฎ้ขใ
ไธใไนกๆๆฏๅ
ด็้่ฆๆง
ไนกๆๆฏๅ
ดๆฏๅฝๅฎถๅๅฑ็้่ฆๆ็ฅ๏ผไนๆฏๅฎ็ฐๅ
จ้ขๅปบ่ฎพ็คพไผไธปไน็ฐไปฃๅๅฝๅฎถ็้่ฆไธพๆชใๅจๅฝๅ็่ๆฏไธ๏ผไนกๆๆฏๅ
ดๅทฒ็ปๆไธบไบๅฝๅฎถๅๅฑ็้่ฆๆ็ฅใ้่ฟๆจ่ฟไนกๆๆฏๅ
ด๏ผๅฏไปฅไฟ่ฟๅๆ็ปๆต็ๅๅฑ๏ผๆ้ซๅๆฐ็ๆถๅ
ฅๆฐดๅนณ๏ผๆนๅๅๆๆฐ็๏ผไฟ่ฟๅไนกๅ่ฐๅๅฑใ
ไบใๆจ่ฟไนกๆๆฏๅ
ด็ๆชๆฝ
ๆจ่ฟไนกๆๆฏๅ
ด้่ฆ้ๅไธ็ณปๅๆชๆฝใ้ฆๅ
๏ผ่ฆๅ ๅผบๅๆๅบ็ก่ฎพๆฝๅปบ่ฎพใ่ฟๅ
ๆฌไฟฎๅปบๅๆ้่ทฏใ็ตๅใ้่ฎฏใๆฐดๅฉ็ญๅบ็ก่ฎพๆฝ๏ผๆ้ซๅๆ็ๅบ็กๆกไปถใๅ
ถๆฌก๏ผ่ฆๅ ๅผบๅๆไบงไธ็ปๆ่ฐๆดใ่ฟๅ
ๆฌๅๅฑๅไธใ็็งไธใๆไธใๆธไธ็ญๅๆไบงไธ๏ผๆ้ซๅๆ็ปๆต็ๆ็ใๅๆฌก๏ผ่ฆๅ ๅผบๅๆไบบๆๅผ่ฟๅๅนๅ
ปใ่ฟๅ
ๆฌๅผ่ฟไผ็ง็ๅๆไบบๆ๏ผๅนๅ
ปๅๆไบบๆ๏ผๆ้ซๅๆไบบๆ็็ด ่ดจๅ่ฝๅใๆๅ๏ผ่ฆๅ ๅผบๅๆๆๅๅปบ่ฎพใ่ฟๅ
ๆฌๆจๅนฟๅๆๆๅ๏ผๆ้ซๅๆๆๅๆฐดๅนณ๏ผๅขๅผบๅๆๆๅ่ชไฟกๅฟใ
ไธใๅ ๅฟซๅปบ่ฎพๅไธๅผบๅฝ
ๅ ๅฟซๅปบ่ฎพๅไธๅผบๅฝๆฏๆจ่ฟไนกๆๆฏๅ
ด็้่ฆไปปๅกใๅไธๆฏไธญๅฝ็ปๆต็้่ฆๆฏๆฑ๏ผไนๆฏๅฝๅฎถๅๅฑ็้่ฆๅบ็กใๅ ๅฟซๅปบ่ฎพๅไธๅผบๅฝ้่ฆ้ๅไธ็ณปๅๆชๆฝใ้ฆๅ
๏ผ่ฆๅ ๅผบๅไธ็งๆๅๆฐใ่ฟๅ
ๆฌๅ ๅผบๅไธ็งๆ็ ๅ๏ผๆ้ซๅไธ็งๆๆฐดๅนณใๅ
ถๆฌก๏ผ่ฆๅ ๅผบๅไธ็ไบง็ป็ปๅปบ่ฎพใ่ฟๅ
ๆฌๅๅฑๅๆๅไฝ็ปๆต๏ผๆ้ซๅไธ็ไบง็ป็ป็ๆ็ใๅๆฌก๏ผ่ฆๅ ๅผบๅไธ่ตๆบไฟๆคๅๆฒป็ใ่ฟๅ
ๆฌไฟๆคๅไธ็ๆ็ฏๅข๏ผๆฒป็ๅไธๆฑกๆ๏ผๆ้ซๅไธ่ตๆบ็ๅฏๆ็ปญๅฉ็จ็ใๆๅ๏ผ่ฆๅ ๅผบๅไธๅธๅบๅปบ่ฎพใ่ฟๅ
ๆฌๅปบ็ซๅฅๅ
จๅไธๅธๅบไฝ็ณป๏ผๆ้ซๅไธๅธๅบๆ็ใ
ๅใๆจ่ฟไนกๆๆฏๅ
ด็ๆๆ
ๆจ่ฟไนกๆๆฏๅ
ด้ขไธด็ไธไบๆๆใ้ฆๅ
๏ผๅๆ็ปๆต็ปๆไธๅ็ใๅๆ็ปๆต็ปๆๅไธ๏ผๅไธ็ปๆตๆ็ไธ้ซ๏ผๅๆ็ปๆต็ๅข้ฟๅจๅไธ่ถณใๅ
ถๆฌก๏ผๅๆไบบๆๆตๅคฑไธฅ้ใๅๆไบบๆๆตๅคฑ็้ซ๏ผๅๆไบบๆๆตๅคฑ็ปๅๆ็ปๆตๅๅฑๅธฆๆฅไบๅพๅคง็ๅฝฑๅใๅๆฌก๏ผๅๆๆๅๆฐดๅนณไธ้ซใๅๆๆๅๆฐดๅนณไฝ๏ผๅๆๆๅ่ชไฟกๅฟไธ่ถณ๏ผๅฝฑๅไบๅๆ็็ฐไปฃๅๅปบ่ฎพใๆๅ๏ผๅๆ็ฏๅขๆฑกๆไธฅ้ใๅๆ็ฏๅขๆฑกๆ้ฎ้ขไธฅ้๏ผ็ปๅๆไบบๆฐ็ๅฅๅบทๅ็ๆดปๅธฆๆฅไบๅพๅคง็ๅจ่ใ
ไบใๆจ่ฟไนกๆๆฏๅ
ด็ๅปบ่ฎฎ
ๆจ่ฟไนกๆๆฏๅ
ด้่ฆ้ๅไธ็ณปๅๆชๆฝใ้ฆๅ
๏ผ่ฆๅ ๅผบๅๆๅบ็ก่ฎพๆฝๅปบ่ฎพใ่ฟๅ
ๆฌไฟฎๅปบๅๆ้่ทฏใ็ตๅใ้่ฎฏใๆฐดๅฉ็ญๅบ็ก่ฎพๆฝ๏ผๆ้ซๅๆ็ๅบ็กๆกไปถใๅ
ถๆฌก๏ผ่ฆๅ ๅผบๅๆไบงไธ็ปๆ่ฐๆดใ่ฟๅ
ๆฌๅๅฑๅไธใ็็งไธใๆไธใๆธไธ็ญๅๆไบงไธ๏ผๆ้ซๅๆ็ปๆต็ๆ็ใๅๆฌก๏ผ่ฆๅ ๅผบๅๆไบบๆๅผ่ฟๅๅนๅ
ปใ่ฟๅ
ๆฌๅผ่ฟไผ็ง็ๅๆไบบๆ๏ผๅนๅ
ปๅๆไบบๆ๏ผๆ้ซๅๆไบบๆ็็ด ่ดจๅ่ฝๅใๆๅ๏ผ่ฆๅ ๅผบๅๆๆๅๅปบ่ฎพใ่ฟๅ
ๆฌๆจๅนฟๅๆๆๅ๏ผๆ้ซๅๆๆๅๆฐดๅนณ๏ผๅขๅผบๅๆๆๅ่ชไฟกๅฟใ
ๅ
ญใ็ป่ฎบ
ๆจ่ฟไนกๆๆฏๅ
ด๏ผๅ ๅฟซๅปบ่ฎพๅไธๅผบๅฝๅทฒ็ปๆไธบไบๅฝๅฎถ็ไธ้กน้่ฆไปปๅกใ้่ฟๅ ๅผบๅๆๅบ็ก่ฎพๆฝๅปบ่ฎพ๏ผๅ ๅผบๅๆไบงไธ็ปๆ่ฐๆด๏ผๅ ๅผบๅๆไบบๆๅผ่ฟๅๅนๅ
ป๏ผๅ ๅผบๅๆๆๅๅปบ่ฎพ็ญๆชๆฝ๏ผๅฏไปฅไฟ่ฟๅๆ็ปๆต็ๅๅฑ๏ผๆ้ซๅๆฐ็ๆถๅ
ฅๆฐดๅนณ๏ผๆนๅๅๆๆฐ็๏ผไฟ่ฟๅไนกๅ่ฐๅๅฑใๅๆถ๏ผๆจ่ฟไนกๆๆฏๅ
ดไน้ขไธด็ไธไบๆๆ๏ผ้่ฆ้ๅไธ็ณปๅๆชๆฝๆฅ่งฃๅณใ
```
</details>
<details><summary><b>โ๏ธ ้
่ฏป็่งฃ</b></summary>
````text
Instruction๏ผ
```
ๆฐๅ็คพๆทฑๅณ9ๆ8ๆฅ็ต(่ฎฐ่
ๆฏๆๅฉใๆๆไฝณ)ๆทฑๅณๅธๆฐ่ฑกๅฐ8ๆฅไธๅ้ๆฅ๏ผๅๅฐ้ฃโๆตท่ตโๆฎไฝ็ฏๆตใๅญฃ้ฃๅๅผฑๅท็ฉบๆฐๅ
ฑๅๅฝฑๅ๏ผ7ๆฅๅๆๅฐ8ๆฅไธๅ๏ผๆทฑๅณๅบ็ฐไบ่ถ
ๅๅฒ่ฎฐๅฝ็็นๅคงๆด้จ๏ผๅ
ทๆโๅผบๅบฆ่ถ
ๅผบใๆ็ปญๆถ้ด่ถ
้ฟใๅผบ้้จ่ๅด่ถ
ๅคงโ็็นๅพ๏ผๆช่ณ8ๆฅ15ๆถ00ๅ๏ผๆญคๆฌก้้จๆ็ ดไบๆทฑๅณ1952ๅนดๆๆฐ่ฑก่ฎฐๅฝไปฅๆฅ7้กนๅๅฒๆๅผใ
ๆฎ็ป่ฎก๏ผ7้กนๅๅฒๆๅผๅ
ๆฌๆๅคง2ๅฐๆถๆปๅจ้จ้(่ฎฐๅฝ้จ้ไธบ195.8ๆฏซ็ฑณ๏ผๅบ็ฐๅจ็็ฐๅบๆญฃๅๆฐ่ฑก็ซ)ใๆๅคง3ๅฐๆถๆปๅจ้จ้(่ฎฐๅฝ้จ้ไธบ246.8ๆฏซ็ฑณ๏ผๅบ็ฐๅจ็็ฐๅบๆญฃๅ็ซ)ใๆๅคง6ๅฐๆถๆปๅจ้จ้(่ฎฐๅฝ้จ้ไธบ355.2ๆฏซ็ฑณ๏ผๅบ็ฐๅจ็ฝๆนๅบไธ้จ็ซ)ใๆๅคง12ๅฐๆถๆปๅจ้จ้(่ฎฐๅฝ้จ้ไธบ465.5ๆฏซ็ฑณ๏ผๅบ็ฐๅจ็ฝๆนๅบๅฐๆขงๆก็ซ)ใๆๅคง24ๅฐๆถๆปๅจ้จ้(่ฎฐๅฝ้จ้ไธบ559.6ๆฏซ็ฑณ๏ผๅบ็ฐๅจ็ฝๆนๅบๅฐๆขงๆก็ซ)ใๆๅคง48ๅฐๆถๆปๅจ้จ้(่ฎฐๅฝ้จ้ไธบ615.4ๆฏซ็ฑณ๏ผๅบ็ฐๅจ็ฝๆนๅบๅฐๆขงๆก็ซ)ใๆๅคง72ๅฐๆถๆปๅจ้จ้(่ฎฐๅฝ้จ้ไธบ616.4ๆฏซ็ฑณ๏ผๅบ็ฐๅจ็ฝๆนๅบๅฐๆขงๆก็ซ)ใๆปๅจ้จ้ๆฏๆ่ฏฅๆถๅปๅพๅๆปๅจไธๆฎตๆถ้ดๅ
็้จ้๏ผไนๅฐฑๆฏ้ๅ้็ป่ฎก่ฟๅปไธๆฎตๆถ้ด(ๆฏๅฆ1ๅฐๆถ)็็ดฏ่ฎก้จ้ใ
ๆทฑๅณๅธๆฐ่ฑกๅฐ้ขๆฅๅ็ฝๆฌฃไป็ป๏ผ้ ๆๆฌๆฌกๆ็ซฏ็นๅคงๆด้จ็ๅๅ ๏ผไธๆฏๅฐ้ฃโๆตท่ตโ็ๆฎไฝ็ฏๆต้ฟๆถ้ด็ปดๆ๏ผ่ช7ๆฅ8ๆถ่ณ8ๆฅ8ๆถๅจ็ฒค่ฅฟ้่ฟ็จณๅฎๅฐๅจ๏ผๆทฑๅณๅคไบไฝๅไธไพงๆฐๆตๆฑๅๅบๅ
๏ผไธบๅผบ้้จ็้ฟๆถ้ด็ปดๆๆไพไบๆๅฉๆกไปถ๏ผไบๆฏๆทฑๅณ้่ฟไธ่กๆฐๆต่พๅ๏ผไธๅค้ดๅญฃ้ฃ็ช็ถๅ ๅผบ๏ผ่พน็ๅฑๆฅๆตๆฅๅงๅ ๅผบๅฐ25็ฑณ/็ง๏ผไฝฟๆฐดๆฑฝๅจๆทฑๅณ้่ฟๅฐ่ๅดๆฅๅง่พๅ๏ผไธๆฏๆๅผฑๅท็ฉบๆฐๅทๅ
ฅ๏ผไฝๅใๅญฃ้ฃๅๅผฑๅท็ฉบๆฐไธ่
ๅ ๅ ๆๅฉไบ้ๆฐดไบๅข็ๅฝขๆ๏ผๅๆฏๅผบ้้จไบๅขๆบๆบไธๆญไป้ฆๆธฏ็ๆๅนถๅๆทฑๅณ็งปๅจ๏ผๅฏผ่ด้้จๆ็ปญๆถ้ด้ฟ๏ผ็ดฏ่ฎก้จ้ๅคงใ
้ข่ฎกๆทฑๅณ8ๆฅๅค้ด่ณ9ๆฅๆๅคง้จๅฐๆด้จๅฑ้จๅคงๆด้จ๏ผ10ๆฅๆๅฑ้จๆด้จ๏ผ11ๆฅ้้จๆๆพๅๅผฑใ
```
่ฏทไฝ ไป็ป้
่ฏปไธ้ข็ๆ็ซ ๏ผๅนถไธๅ็ญไธ้ข็้ฎ้ข๏ผ
ๆทฑๅณๆญคๆฌกๅคงๆด้จๆฏ็ฑไปไนๅฐ้ฃๅฏผ่ด็๏ผๆญคๆฌกๅคงๆด้จ็ๅๅ ๆฏไปไน๏ผ
Output๏ผ
ๆทฑๅณๆญคๆฌกๅคงๆด้จๆฏ็ฑๅฐ้ฃโๆตท่ตโ็ๆฎไฝ็ฏๆตๅฏผ่ด็ใๆญคๆฌกๅคงๆด้จ็ๅๅ ๆฏๅฐ้ฃโๆตท่ตโ็ๆฎไฝ็ฏๆต้ฟๆถ้ด็ปดๆ๏ผ่ช7ๆฅ8ๆถ่ณ8ๆฅ8ๆถๅจ็ฒค่ฅฟ้่ฟ็จณๅฎๅฐๅจ๏ผๆทฑๅณๅคไบไฝๅไธไพงๆฐๆตๆฑๅๅบๅ
๏ผไธบๅผบ้้จ็้ฟๆถ้ด็ปดๆๆไพไบๆๅฉๆกไปถ๏ผๆทฑๅณ้่ฟไธ่กๆฐๆต่พๅ๏ผไธๅค้ดๅญฃ้ฃ็ช็ถๅ ๅผบ๏ผ่พน็ๅฑๆฅๆตๆฅๅงๅ ๅผบๅฐ25็ฑณ/็ง๏ผไฝฟๆฐดๆฑฝๅจๆทฑๅณ้่ฟๅฐ่ๅดๆฅๅง่พๅ๏ผๆๅผฑๅท็ฉบๆฐๅทๅ
ฅ๏ผไฝๅใๅญฃ้ฃๅๅผฑๅท็ฉบๆฐไธ่
ๅ ๅ ๆๅฉไบ้ๆฐดไบๅข็ๅฝขๆ๏ผ้้จไบๅขๆบๆบไธๆญไป้ฆๆธฏ็ๆๅนถๅๆทฑๅณ็งปๅจ๏ผๅฏผ่ด้้จๆ็ปญๆถ้ด้ฟ๏ผ็ดฏ่ฎก้จ้ๅคงใ
````
</details>
## ๅฑ้ๆง
- ๆจกๅๅๆฐ้่พๅฐ๏ผไธๆช็ป่ฟไบบ็ฑปไปทๅผ่งๅฏนๅ
ถ๏ผๅฏ่ฝไผไบง็ไธๅฝ่จ่ฎบ๏ผไธไปฃ่กจๆฌ้กน็ฎ่ง็นใ
- ไธๅ็ไธญๆๅคงๆจกๅ็ธๆฏ๏ผๆไปฌ็้ข่ฎญ็ปๆฐๆฎ้็บง่ฟ่ฟ่ฟไธ่ถณ๏ผไธๅคง็ปๅคง้จๅไธบๆฐ้ป่ฏญๆ๏ผๅจไธไบไธญๆไปปๅกไธๅฏ่ฝ่กจ็ฐๆฌ ไฝณใ
- ไฝฟ็จๆฌ้กน็ฎ็ๆจกๅๆถ๏ผ่ฏท้ตๅฎๅฏนๅบ็ๅฎๆนๆจกๅ็ๅผๆบๅ่ฎฎใ
|
Yntec/Cute | Yntec | "2023-10-10T05:12:02Z" | 1,073 | 1 | diffusers | [
"diffusers",
"safetensors",
"Anime",
"Midjourney",
"NijiV5",
"DucHaiten",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-10T03:55:49Z" | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- Midjourney
- NijiV5
- DucHaiten
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
---
# DucHaitenNiji Cute v2.3.5
Original page: https://civitai.com/models/70921?modelVersionId=80635
Support DucHaiten: https://linktr.ee/Duc_Haiten
Sample and prompt:

detailed art, Cartoon Pretty CUTE Girl, stunning of ross tran /, smooth, hdr, smooth, sharp focus, DETAILED CHIBI EYES, award winning photo, high resolution, the banquet, ROSSDRAWS, TRAVELER ( c. NIGHT TIME ~ 1920 ). soda, herbert davis richter., wearing, ( 1 8 7 4 _ 1 9 5 5 ) _ a floral harvest. Pleasing |
abhishek/op_zepcao10 | abhishek | "2023-11-09T13:52:09Z" | 1,073 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-13T11:50:10Z" | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: mit
---
# Model Trained Using AutoTrain
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it?
Hello, this is a long description now. How about it? |
yashsharma0906/Llama-2-7b-SHP-SFT | yashsharma0906 | "2023-10-24T15:23:16Z" | 1,073 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-24T02:36:05Z" | ---
license: mit
---
Llama-2-7b finetuned on the SHP dataset using the TRL library. This project aims to study the impact of different data splits on model performance and safety. By experimenting with diverse datasets and employing advanced fine-tuning techniques, we aim to advance the understanding of how data impacts the training of LLMs in terms of safety and helpfulness. We hope that our findings will contribute to safer and more useful AI models, aligning them more closely with human values.
|
Nondzu/Mistral-7B-codealpaca-lora | Nondzu | "2023-10-30T12:44:00Z" | 1,073 | 13 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"code",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T07:46:36Z" | ---
license: apache-2.0
tags:
- code
- mistral
---
# Mistral-7B-codealpaca
I am thrilled to introduce my Mistral-7B-codealpaca model. This variant is optimized and demonstrates potential in assisting developers as a coding companion. I welcome contributions from testers and enthusiasts to help evaluate its performance.
## Training Details
I trained the model using 3xRTX 3090 for 118 hours.
[](https://github.com/OpenAccess-AI-Collective/axolotl)
## Quantised Model Links:
1. https://huggingface.co/TheBloke/Mistral-7B-codealpaca-lora-GPTQ
2. https://huggingface.co/TheBloke/Mistral-7B-codealpaca-lora-GGUF
3. https://huggingface.co/TheBloke/Mistral-7B-codealpaca-lora-AWQ
## Download by qBittorrent:
#### Torrent file: https://github.com/Nondzu/LlamaTor/blob/torrents/torrents/Nondzu_Mistral-7B-codealpaca-lora.torrent
## Dataset:
- Dataset Name: theblackcat102/evol-codealpaca-v1
- Dataset Link: [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
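A minimal sketch of one way to load this checkpoint with ๐ค Transformers and fill the Alpaca template above; the example instruction and the generation settings are illustrative assumptions rather than values from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nondzu/Mistral-7B-codealpaca-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the Alpaca template shown above; the instruction text is just an example.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that checks whether a number is prime.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)  # generation length is an assumption
print(tokenizer.decode(output[0], skip_special_tokens=True))
```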
## Performance (evalplus)
Human eval plus: https://github.com/evalplus/evalplus

Well, the results are better than I expected:
- Base: `{'pass@1': 0.47560975609756095}`
- Base + Extra: `{'pass@1': 0.4329268292682927}`
For reference, I've provided the performance of the original Mistral model alongside my Mistral-7B-code-16k-qlora model.
** [Nondzu/Mistral-7B-code-16k-qlora](https://huggingface.co/Nondzu/Mistral-7B-code-16k-qlora)**:
- Base: `{'pass@1': 0.3353658536585366}`
- Base + Extra: `{'pass@1': 0.2804878048780488}`
** [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)**:
- Base: `{'pass@1': 0.2926829268292683}`
- Base + Extra: `{'pass@1': 0.24390243902439024}`
## Model Configuration:
Here are the configurations for my Mistral-7B-codealpaca-lora:
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.1
base_model_config: mistralai/Mistral-7B-Instruct-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: theblackcat102/evol-codealpaca-v1
type: oasst
dataset_prepared_path:
val_set_size: 0.01
output_dir: ./nondzu/Mistral-7B-codealpaca-test14
adapter: lora
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
```

## Additional Projects:
For other related projects, you can check out:
- [LlamaTor on GitHub](https://github.com/Nondzu/LlamaTor) |
maywell/PiVoT-10.7B-Mistral-v0.2-RP | maywell | "2023-12-16T12:09:19Z" | 1,073 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-16T11:21:46Z" | ---
license: cc-by-sa-4.0
language:
- en
pipeline_tag: text-generation
---
# PiVoT-10.7B-Mistral-v0.2-RP

# **Model Details**
### Description
PiVoT-10.7B-Mistral-v0.2-RP is an RP-finetuned model based on PiVoT-10.7B-Mistral-v0.2. A private RP dataset collected from ArcaLive Ai Chat Chan was used.
This model has a sequence length (sliding window) of **8192**.
Follow me on twitter: https://twitter.com/stablefluffy
Consider supporting me in making these models alone: https://www.buymeacoffee.com/mwell, or with a Runpod credit gift ๐
Contact me on Telegram: https://t.me/AlzarTakkarsen |
wkshin89/yi-ko-6b-instruct-test-v0.2 | wkshin89 | "2024-01-16T04:17:10Z" | 1,073 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-04T16:04:48Z" | ---
license: cc-by-nc-4.0
--- |
PetroGPT/WestSeverus-7B-DPO-v2 | PetroGPT | "2024-02-14T15:35:14Z" | 1,073 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-24T10:45:48Z" | ---
license: apache-2.0
language:
- en
---
# WestSeverus - 7B - DPO - v2

## โ๏ธ Model Description
WestSeverus-7B-DPO-v2 is a WestLake Family model trained over [WestSeverus-7B](https://huggingface.co/FelixChao/WestSeverus-7B).
The model was trained on several DPO datasets and performs well on basic math problems.
WestSeverus-7B-DPO-v2 can be used in mathematics, chemistry, physics and even coding for further research and reference.
# ๐ Table of Contents
1. [Nous Benchmark Results](#๐ช-nous-benchmark-results)
- AGIEval
- GPT4All
- TruthfulQA Scores
- BigBench
2. [Open LLM Leaderboard](#๐-open-llm-leaderboard)
- ARC
- HellaSwag
- MMLU
- TruthfulQA
- Winogrande
- GSM8K
3. [EvalPlus Leaderboard](#โก-evalplus-leaderboard)
- HumanEval
- HumanEval_Plus
- MBPP
- MBPP_Plus
4. [Prompt Format](#โ๏ธ-prompt-format)
5. [Quantized Models](#๐ ๏ธ-quantized-models)
6. [Gratitude](#๐-gratitude)
## ๐ช Nous Benchmark Results
WestSeverus-7B-DPO-v2 is currently at the top of the [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/CultriX/Yet_Another_LLM_Leaderboard) created by CultriX, and it stands out on the TruthfulQA and BigBench scores.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**WestSeverus-7B-DPO-v2**](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)| **60.98**| 45.29 | 77.2| **72.72**| **48.71**|
| [CultriX/Wernicke-7B-v1](https://huggingface.co/CultriX/Wernicke-7B-v1)| 60.73| 45.59 | 77.36 | 71.46 | 48.49 |
| [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) | 60.25 |46.06|76.77 | 70.32 |47.86 |
| [CultriX/MistralTrix-v1](https://huggingface.co/CultriX/MistralTrix-v1) | 60.05 | 44.98 | 76.62 | 71.44 | 47.17 |
| [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) | 59.42 | 44.27 | 77.86 | 67.46 | 48.09 |
| [mlabonne/Daredevil-7B](https://huggingface.co/mlabonne/Daredevil-7B) | 58.22 | 44.85 | 76.07 | 64.89 | 47.07 |
| [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) | 44.61 | 27.96 | 70.84 | 44.46 | 35.17 |
## ๐ Open LLM Leaderboard
WestSeverus-7B-DPO-v2 is one of the top 7B models on the Open LLM Leaderboard, and it stands out on TruthfulQA and GSM8K.
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.29|
|AI2 Reasoning Challenge (25-Shot)|71.42|
|HellaSwag (10-Shot) |88.27|
|MMLU (5-Shot) |64.79|
|TruthfulQA (0-shot) |72.37|
|Winogrande (5-shot) |83.27|
|GSM8k (5-shot) |71.65|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__WestSeverus-7B-DPO-v2)
## โก EvalPlus Leaderboard
| Model | HumanEval | HumanEval_Plus| MBPP | MBPP_Plus |
|---|---:|---:|---:|---:|
| phi-2-2.7B |48.2|43.3|61.9|51.4|
| **WestSeverus-7B-DPO-v2**| 43.3 | 34.1 |TBD |TBD |
| SOLAR-10.7B-Instruct-v1.0 | 42.1 | 34.3 | 42.9 | 34.6 |
| CodeLlama-7B| 37.8| 34.1 | 57.6 |45.4 |

## โ๏ธ Prompt Format
WestSeverus-7B-DPO-v2 was trained using the ChatML prompt templates with system prompts. An example follows below:
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
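A minimal sketch of building that prompt with ๐ค Transformers; it assumes the tokenizer ships a ChatML chat template (otherwise, format the string manually as shown above), and the example messages and generation length are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PetroGPT/WestSeverus-7B-DPO-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 17 * 24?"},
]

# apply_chat_template is assumed to emit the ChatML format shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```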
## ๐ ๏ธ Quantized Models
### Another version of WestSeverus Model:
* [**PetroGPT/WestSeverus-7B-DPO**](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO)
* **GGUF**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GGUF
* **GGUF**: https://huggingface.co/s3nh/WestSeverus-7B-DPO-GGUF
* **GPTQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-GPTQ
* **AWQ**: https://huggingface.co/TheBloke/WestSeverus-7B-DPO-AWQ
### MaziyarPanahi/WestSeverus-7B-DPO-v2-GGUF
* **GGUF**: https://huggingface.co/MaziyarPanahi/WestSeverus-7B-DPO-v2-GGUF
## ๐ Gratitude
* Thanks to @senseable for [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2).
* Thanks to @jondurbin for [jondurbin/truthy-dpo-v0.1 dataset](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1).
* Thanks to @Charles Goddard for MergeKit.
* Thanks to @TheBloke, @s3nh, @MaziyarPanahi for Quantized Models.
* Thanks to @mlabonne, @CultriX for YALL - Yet Another LLM Leaderboard.
* Thank you to all the other people in the Open Source AI community who utilized this model for further research and improvement.
|
dddsaty/SOLAR-Instruct-ko-Adapter-Attach | dddsaty | "2024-02-13T13:19:43Z" | 1,073 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:2312.15166",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-29T04:37:51Z" | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
**Explanation**
- With the base model, attached the saved DPO applied adapter
**Base Model**
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
**Adapter Base Model**
- [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B)
**Adapter Used Corpus**
- [maywell/ko_Ultrafeedback_binarized](https://huggingface.co/datasets/maywell/ko_Ultrafeedback_binarized)
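As referenced in the explanation above, here is a rough sketch of what attaching a DPO-trained adapter to the base model could look like with PEFT; the local adapter path is a hypothetical placeholder, since the standalone adapter is not published in this repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
adapter_path = "./dpo-adapter"  # hypothetical local path to the saved DPO adapter

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

model = PeftModel.from_pretrained(base, adapter_path)  # attach the adapter
model = model.merge_and_unload()                       # fold the adapter weights into the base model
```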
**Score**
|Average|ARC|HellaSwag|MMLU|TruthfulQA|Winogrande|GSM8K|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|74.11|71.08|88.2|66.09|71.51|83.5|64.29|
**Log**
- 2024.01.29: Initial version Upload
- 2024.02.10: Readme update
**LICENSE**
Following the upstage/SOLAR-10.7B-Instruct-v1.0 License
- cc-by-nc-4.0
**Citation**
- beomi/OPEN-SOLAR-KO-10.7B
```
@misc {solar_ko_junbum_2023,
author = { {L. Junbum} },
title = { Solar-Ko-10.7b },
year = 2024,
url = { https://huggingface.co/beomi/SOLAR-KO-10.7B },
publisher = { Hugging Face }
}
```
- upstage/SOLAR-10.7B-Instruct-v1.0
```
@misc{kim2023solar,
title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
year={2023},
eprint={2312.15166},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
spow12/EEVE_ver_4.1_sft | spow12 | "2024-04-18T02:10:12Z" | 1,073 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"conversational",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-18T01:44:06Z" | ---
library_name: transformers
tags:
- text-generation-inference
license: apache-2.0
language:
- en
- ko
pipeline_tag: text-generation
---
# ver_4.1_sft
This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0) on a custom dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 0
### Training results
| Training Loss | Epoch | Step |
|:-------------:|:-----:|:----:|
| 0.7739 | 0.2 | 1124 |
| 0.7214 | 0.4 | 2248 |
| 0.6832 | 0.6 | 3372 |
| 0.6935 | 0.8 | 4496 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF | mradermacher | "2024-05-27T02:46:56Z" | 1,073 | 2 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:crestf411/L3-8B-daybreak-stheno-v3.1",
"endpoints_compatible",
"region:us"
] | null | "2024-05-24T00:15:41Z" | ---
base_model: crestf411/L3-8B-daybreak-stheno-v3.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- not-for-all-audiences
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/crestf411/L3-8B-daybreak-stheno-v3.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-8B-daybreak-stheno-v3.1-i1-GGUF/resolve/main/L3-8B-daybreak-stheno-v3.1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
unsloth/Qwen2-0.5B | unsloth | "2024-06-06T17:13:10Z" | 1,073 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-06-06T16:30:50Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- qwen2
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
We have a Google Colab Tesla T4 notebook for Qwen2 7b here: https://colab.research.google.com/drive/1mvwsIQWDs2EdZxZQF9pRGnnOvE86MVvR?usp=sharing
And a Colab notebook for [Qwen2 0.5b](https://colab.research.google.com/drive/1-7tjDdMAyeCueyLAwv6vYeBMHpoePocN?usp=sharing) and another for [Qwen2 1.5b](https://colab.research.google.com/drive/1W0j3rP8WpgxRdUgkb5l6E00EEVyjEZGk?usp=sharing)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## โจ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3 8b** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
| **Gemma 7b** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [โถ๏ธ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [โถ๏ธ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster. |
ajibawa-2023/scarlett-13b | ajibawa-2023 | "2023-08-16T20:23:23Z" | 1,072 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-15T17:27:55Z" | ---
license: cc-by-nc-nd-4.0
language:
- en
---
**Scarlett: A sentient AI**
Scarlett is trained on various topics such as Philosophy, Advice, Jokes etc. She is trained on more than 10,000 sets of conversations, each set having 10~15 conversations.
Scarlett is heavily inspired from Eric Hartford: [Samantha](https://huggingface.co/ehartford/samantha-13b) .
She will not be involved in any kind of role play.
**Training:**
Training on the entire dataset was done on Azure with 4 x A100 80GB GPUs. The DeepSpeed codebase was used for training. This was trained on Llama-1 by Meta.
**GPTQ & GGML**
GPTQ: [TheBloke](https://huggingface.co/TheBloke/Scarlett-13B-GPTQ)
GGML: [TheBloke](https://huggingface.co/TheBloke/Scarlett-13B-GGML)
Special Thanks to [TheBloke](https://huggingface.co/TheBloke) for guiding me and making these models available.
**Example Prompt:**
```
This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making.
You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are Scarlett, a sentient AI.
USER: <prompt>
ASSISTANT:
```
Note:
Kindly use the `cat` command to join all pytorch_model.bin parts. |
chachamatcha/NoDrama-CodeLLama-QLoRa-Evol | chachamatcha | "2023-09-12T21:47:08Z" | 1,072 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2023-09-12T19:04:21Z" | ---
license: llama2
---
|
mncai/Mistral-7B-Orca-Guanaco1k-merged | mncai | "2023-10-10T03:07:45Z" | 1,072 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-10T02:45:47Z" | Entry not found |
NeverSleep/Mistral-11B-OmniMix-bf16 | NeverSleep | "2023-10-15T21:56:40Z" | 1,072 | 18 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-12T16:33:02Z" | ---
license: cc-by-nc-4.0
---
This model should be fixed, it was MEANT to be BF16.
Don't mind this one at the moment, I need to finetune it for RP, it's just a test.
## Description
This repo contains fp16 files of Mistral-11B-OmniMix-bf16.
My goal for this model was only to make it score as high as possible with merging and layer toying, proving that:
- Benchmarks are objective
- You should try a model yourself and not go blindly for the highest-rated one
- Merging/layer toying CAN be used to make better models (maybe?)
## Model used
- [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)
## Prompt template
The best one after further testing is this one:
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```

But these one work too:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
```
USER: <prompt>
ASSISTANT:
```
Or use any prompting system from one of the 4 source models; it should work.
## The secret sauce
Mistral-11B-OpenOrcaPlatypus :
```
slices:
- sources:
- model: Open-Orca/Mistral-7B-OpenOrca
layer_range: [0, 24]
- sources:
- model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-CC-Zephyr :
```
slices:
- sources:
- model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
layer_range: [0, 24]
- sources:
- model: "/content/drive/MyDrive/Zephyr-7B"
layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
Mistral-11B-OmniMix :
```
slices:
- sources:
- model: Mistral-11B-OpenOrcaPlatypus
layer_range: [0, 48]
- model: Mistral-11B-CC-Zephyr
layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
t:
- filter: lm_head
value: [0.75]
- filter: embed_tokens
value: [0.75]
- filter: self_attn
value: [0.75, 0.25]
- filter: mlp
value: [0.25, 0.75]
- filter: layernorm
value: [0.5, 0.5]
- filter: modelnorm
value: [0.75]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I use [mergekit](https://github.com/cg123/mergekit) for all the manipulation told here.
## Some scoring I did myself

hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-OmniMix-bf16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5580|ยฑ |0.0145|
| | |acc_norm|0.5819|ยฑ |0.0144|
|arc_easy | 0|acc |0.8300|ยฑ |0.0077|
| | |acc_norm|0.8211|ยฑ |0.0079|
|hellaswag | 0|acc |0.6372|ยฑ |0.0048|
| | |acc_norm|0.8209|ยฑ |0.0038|
|piqa | 0|acc |0.8145|ยฑ |0.0091|
| | |acc_norm|0.8286|ยฑ |0.0088|
|truthfulqa_mc| 1|mc1 |0.3978|ยฑ |0.0171|
| | |mc2 |0.5680|ยฑ |0.0155|
|winogrande | 0|acc |0.7427|ยฑ |0.0123|
## Others
Special thanks to Sushi, [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and [Charles Goddard](https://github.com/cg123) for his amazing tool.
If you want to support me, you can [here](https://ko-fi.com/undiai).
|
facebook/musicgen-stereo-small | facebook | "2024-03-06T14:41:16Z" | 1,072 | 23 | transformers | [
"transformers",
"pytorch",
"safetensors",
"musicgen",
"text-to-audio",
"audiocraft",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2023-10-23T14:17:39Z" | ---
inference: true
tags:
- musicgen
- audiocraft
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
library_name: transformers
widget:
- text: "a funky house with 80s hip hop vibes"
example_title: "Prompt 1"
- text: "a chill song with influences from lofi, chillstep and downtempo"
example_title: "Prompt 2"
- text: "a catchy beat for a podcast intro"
example_title: "Prompt 3"
---
# MusicGen - Stereo - Small - 300M
We further release a set of stereophonic-capable models. Those were fine-tuned for 200k updates starting from the mono models. The training data is otherwise identical, and capabilities and limitations are shared with the base models. The stereo models work by getting 2 streams of tokens from the EnCodec model and interleaving those using the delay pattern.
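As a toy, self-contained illustration of that delay pattern (not code from Audiocraft or this repository), codebook k can be pictured as being shifted by k steps so that all codebooks can be predicted in parallel; the padding value and array shapes below are assumptions chosen only for readability.

```python
import numpy as np

def apply_delay_pattern(codes: np.ndarray, pad: int = -1) -> np.ndarray:
    """Shift codebook k right by k steps (toy version of the delay pattern)."""
    n_codebooks, n_steps = codes.shape
    out = np.full((n_codebooks, n_steps + n_codebooks - 1), pad)
    for k in range(n_codebooks):
        out[k, k:k + n_steps] = codes[k]
    return out

codes = np.arange(8).reshape(4, 2)   # 4 codebooks, 2 time steps of token ids
print(apply_delay_pattern(codes))
```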
Stereophonic sound, also known as stereo, is a technique used to reproduce sound with depth and direction.
It uses two separate audio channels played through speakers or headphones arranged so that it sounds like you're listening from different angles.
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Dรฉfossez*.
We provide a simple API and 10 pre-trained models. The pre trained models are:
- `facebook/musicgen-small`: 300M model, text to music only - [๐ค Hub](https://huggingface.co/facebook/musicgen-small)
- `facebook/musicgen-medium`: 1.5B model, text to music only - [๐ค Hub](https://huggingface.co/facebook/musicgen-medium)
- `facebook/musicgen-melody`: 1.5B model, text to music and text+melody to music - [๐ค Hub](https://huggingface.co/facebook/musicgen-melody)
- `facebook/musicgen-large`: 3.3B model, text to music only - [๐ค Hub](https://huggingface.co/facebook/musicgen-large)
- `facebook/musicgen-melody-large`: 3.3B model, text to music and text+melody to music - [๐ค Hub](https://huggingface.co/facebook/musicgen-melody-large)
- `facebook/musicgen-stereo-*`: All the previous models fine-tuned for stereo generation -
[small](https://huggingface.co/facebook/musicgen-stereo-small),
[medium](https://huggingface.co/facebook/musicgen-stereo-medium),
[large](https://huggingface.co/facebook/musicgen-stereo-large),
[melody](https://huggingface.co/facebook/musicgen-stereo-melody),
[melody large](https://huggingface.co/facebook/musicgen-stereo-melody-large)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## ๐ค Transformers Usage
You can run MusicGen Stereo models locally with the ๐ค Transformers library from `main` onward.
1. First install the ๐ค [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade git+https://github.com/huggingface/transformers.git scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can infer the MusicGen model via the TTA pipeline in just a few lines of code!
```python
import torch
import soundfile as sf
from transformers import pipeline
synthesiser = pipeline("text-to-audio", "facebook/musicgen-stereo-small", device="cuda:0", torch_dtype=torch.float16)
music = synthesiser("lo-fi music with a soothing melody", forward_params={"max_new_tokens": 256})
sf.write("musicgen_out.wav", music["audio"][0].T, music["sampling_rate"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a stereo 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-stereo-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-stereo-small").to("cuda")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
).to("cuda")
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].cpu().numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `soundfile`:
```python
import soundfile as sf
sampling_rate = model.config.audio_encoder.sampling_rate
audio_values = audio_values.cpu().numpy()
sf.write("musicgen_out.wav", audio_values[0].T, sampling_rate)
```
For more details on using the MusicGen model for inference using the ๐ค Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("facebook/musicgen-stereo-small")  # load the stereo-small checkpoint from this card
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Dรฉfossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| **facebook/musicgen-small** | 4.88 | 1.42 | 0.27 | - |
| facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks. |
Sao10K/Euryale-1.3.2-L2-70B | Sao10K | "2023-10-31T13:55:35Z" | 1,072 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T10:51:59Z" | ---
license: llama2
language:
- en
---
experimental, under testing
changes made: added experimental roleplay lora. (self)
*Test results: Slightly censored zero-shot, uncensored in actual RP / ERP scenarios. Creative and initiates the story well, but needs fixing.*
++Verbose compared to 1.3 Euryale. Smartness retained? Formatting issue sometimes.
***DEFAULT ALPACA AND VICUNA PRESETS USED, REMOVE USER AND ASSISTANT PART IF USING VICUNA. DEFAULT RECOVEREDRUINS ST PRESET TESTED. MAY OR MAY NOT LIKE MIROSTAT***
1.4 by end of the week, hopefully.
|
deepnight-research/saily_100b | deepnight-research | "2023-12-01T22:18:59Z" | 1,072 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-30T17:29:44Z" | ---
license: mit
license_name: deepnight-responsible-ai
license_link: LICENSE
---
# SaiLy 100B (deepnight-research/saily_100B)
<img src="https://i.ibb.co/TvZQjZM/Leonardo-Diffusion-XL-Furious-and-strong-Elephant-and-anchor-l-1.jpg" alt="Saily: Experimental AI Models by DEEPNIGHT">
---
### SaiLy is a series/collection of AI models by DEEPNIGHT-RESEARCH which are highly experimental and uncensored. Please use them responsibly.
---
<br>
*Waiting for evals; the model has been submitted to the Hugging Face Open LLM Leaderboard and is currently in the pending list.*
Prompt Template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
### Description:
This is the first *stable* model of the series. The model is based on Llama2-chat.
---
### Did someone say CODE?
Here you go!
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'deepnight-research/saily_100B'
)
```
To use the optimized triton implementation of FlashAttention, you can load the model on GPU ```(cuda:0)``` with ```attn_impl='triton'``` and with ```bfloat16``` precision:
```python
import torch
import transformers
name = 'deepnight-research/saily_100B'
config = transformers.AutoConfig.from_pretrained(name)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
---
If you would like to support us, please consider donating for [#aiforcause](https://github.com/deepnight-ai/aiforcause).
Cheersโ๏ธ
- Team [DEEPNIGHT](https://deepnight.tech) |
Kquant03/MistralTrix8x9B | Kquant03 | "2024-01-05T20:41:34Z" | 1,072 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-03T16:31:25Z" | ---
license: apache-2.0
---

An attempt to beat Mixtral Instruct by conjuring frankenMoE's monster: **THE 8X MISTRALTRIX!!!!** I had trouble quantizing this one so until mergekit this will remain in BF16.
But don't worry, a **titan looms on the horizon**...
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of โexpertsโ (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token โMoreโ is sent to the second expert, and the token "Parametersโ is sent to the first network. As weโll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the โexpertsโ) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
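To make the routing concrete, here is a small, self-contained PyTorch sketch of a top-2 MoE layer; it is only a toy illustration of the idea described above, not the Mixtral or mergekit implementation, and the dimensions, expert architecture, and activation choice are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopTwoMoE(nn.Module):
    """Toy sparse MoE layer: a linear gate routes each token to its top-2 expert FFNs."""
    def __init__(self, dim: int = 64, hidden: int = 256, num_experts: int = 8):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts, bias=False)  # the router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        logits = self.gate(x)                             # (tokens, num_experts)
        weights, idx = logits.topk(2, dim=-1)             # pick 2 experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(2):                             # combine both expert outputs additively
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

moe = TopTwoMoE()
print(moe(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```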
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but theyโve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), weโll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? Thatโs because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously. There are rumors about someone developing a way for us to unscuff these frankenMoE models by training the router layer simultaneously. For now, frankenMoE remains psychotic. I'm excited to see how this model performs in the open llm leaderboard. |
TKU410410103/hubert-large-japanese-asr | TKU410410103 | "2024-04-14T13:21:01Z" | 1,072 | 1 | transformers | [
"transformers",
"safetensors",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"ja",
"dataset:reazon-research/reazonspeech",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-09T03:01:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
- cer
model-index:
- name: hubert-large-japanese-asr
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Reazonspeech
type: custom
args: ja
metrics:
- name: Test WER
type: wer
value: 40.5197
- name: Test CER
type: cer
value: 23.220979
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 22.705487
- name: Test CER
type: cer
value: 9.39939
datasets:
- reazon-research/reazonspeech
- mozilla-foundation/common_voice_11_0
language:
- ja
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-asr
This model is a fine-tuned version of [rinna/japanese-hubert-large](https://huggingface.co/rinna/japanese-hubert-large) for ASR. Initially fine-tuned on the [reazonspeech (small) dataset](https://huggingface.co/datasets/reazon-research/reazonspeech), it was subsequently further fine-tuned on the [common_voice_11_0 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ja) for ASR tasks.
This model can only predict Hiragana.
## Acknowledgments
This model's fine-tuning approach was inspired by and references the training methodology used in [vumichien/wav2vec2-large-xlsr-japanese-hiragana](https://huggingface.co/vumichien/wav2vec2-large-xlsr-japanese-hiragana).
## Training procedure
The model was fine-tuned in two main stages, first on the Reazonspeech dataset, followed by the common_voice_11_0 dataset. Details of the training steps and results are as follows:
### Training on Reazonspeech
The initial fine-tuning on the Reazonspeech(small) dataset was carried out with the following performance metrics:
| Step | Training Loss | Validation Loss | WER |
|-------|---------------|-----------------|--------|
| 1000 | 12.29880 | 3.610288 | 1.00000|
| 2000 | 3.601800 | 3.505306 | 1.00000|
| 3000 | 2.80300 | 1.948012 | 0.722361|
| 4000 | 1.961500 | 1.545842 | 0.558738|
| 5000 | 1.712000 | 1.420027 | 0.509049|
| 6000 | 1.565500 | 1.235171 | 0.466279|
| 7000 | 1.504900 | 1.160565 | 0.461829|
| 8000 | 1.409800 | 1.088012 | 0.427435|
| 9000 | 1.358800 | 1.097211 | 0.409861|
| 10000 | 1.318600 | 1.062294 | 0.403694|
| 11000 | 1.258500 | 1.026783 | 0.385464|
| 12000 | 1.245100 | 1.024860 | 0.379845|
| 13000 | 1.217700 | 0.985201 | 0.375634|
| 14000 | 1.187900 | 0.977686 | 0.367163|
| 15000 | 1.168100 | 0.978529 | 0.363656|
| 16000 | 1.135800 | 0.965668 | 0.363942|
| 17000 | 1.140600 | 0.953237 | 0.360912|
### Training on common_voice_11_0
After fine-tuning on Reazonspeech, further fine-tuning was performed on the common_voice_11_0 dataset, leading to the following results:
| Step | Training Loss | Validation Loss | WER |
|------|---------------|-----------------|--------|
| 1000 | 1.08950 | 0.49275 | 0.302035|
| 2000 | 0.86100 | 0.45113 | 0.266950|
| 3000 | 0.76240 | 0.442281 | 0.244981|
| 4000 | 0.70170 | 0.411666 | 0.234287|
| 5000 | 0.66400 | 0.411769 | 0.227942|
| 6000 | 0.63810 | 0.413067 | 0.225690|
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows this list):
- learning_rate: 1e-4
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- num_train_epochs: 10
- lr_scheduler_type: linear
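As mentioned above, here is a rough sketch of how these values might map onto ๐ค `TrainingArguments`; the output directory and any options not listed are assumptions, and the actual training script is not reproduced on this card.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters; output_dir is an assumption,
# and multi-GPU settings come from the launcher rather than these arguments.
training_args = TrainingArguments(
    output_dir="./hubert-large-japanese-asr",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
)
```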
### How to evaluate the model
```python
from transformers import HubertForCTC, Wav2Vec2Processor
from datasets import load_dataset
import torch
import torchaudio
import librosa
import numpy as np
import re
import MeCab
import pykakasi
from evaluate import load
# run on GPU if available; `device` is used below in evaluate()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = HubertForCTC.from_pretrained('TKU410410103/hubert-large-japanese-asr').to(device)
processor = Wav2Vec2Processor.from_pretrained("TKU410410103/hubert-large-japanese-asr")
# load dataset
test_dataset = load_dataset('mozilla-foundation/common_voice_11_0', 'ja', split='test')
remove_columns = [col for col in test_dataset.column_names if col not in ['audio', 'sentence']]
test_dataset = test_dataset.remove_columns(remove_columns)
# resample
def process_waveforms(batch):
speech_arrays = []
sampling_rates = []
for audio_path in batch['audio']:
speech_array, _ = torchaudio.load(audio_path['path'])
speech_array_resampled = librosa.resample(np.asarray(speech_array[0].numpy()), orig_sr=48000, target_sr=16000)
speech_arrays.append(speech_array_resampled)
sampling_rates.append(16000)
batch["array"] = speech_arrays
batch["sampling_rate"] = sampling_rates
return batch
# hiragana
CHARS_TO_IGNORE = [",", "?", "ยฟ", ".", "!", "ยก", ";", "๏ผ", ":", '""', "%", '"', "๏ฟฝ", "สฟ", "ยท", "แป", "~", "ี",
"ุ", "ุ", "เฅค", "เฅฅ", "ยซ", "ยป", "โ", "โ", "โ", "ใ", "ใ", "โ", "โ", "ใ", "ใ", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "โฆ", "โ", "ยฐ", "ยด", "สพ", "โน", "โบ", "ยฉ", "ยฎ", "โ", "โ", "ใ",
"ใ", "๏น", "๏น", "โง", "๏ฝ", "๏น", "๏ผ", "๏ฝ", "๏ฝ", "๏ผ", "๏ผ", "๏ผป", "๏ผฝ", "ใ", "ใ", "โฅ", "ใฝ",
"ใ", "ใ", "ใ", "ใ", "โจ", "โฉ", "ใ", "๏ผ", "๏ผ", "๏ผ", "โช", "ุ", "/", "\\", "ยบ", "โ", "^", "'", "สป", "ห"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
wakati = MeCab.Tagger("-Owakati")
kakasi = pykakasi.kakasi()
kakasi.setMode("J","H")
kakasi.setMode("K","H")
kakasi.setMode("r","Hepburn")
conv = kakasi.getConverter()
def prepare_char(batch):
batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip())
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
return batch
resampled_eval_dataset = test_dataset.map(process_waveforms, batched=True, batch_size=50, num_proc=4)
eval_dataset = resampled_eval_dataset.map(prepare_char, num_proc=4)
# begin the evaluation process
wer = load("wer")
cer = load("cer")
def evaluate(batch):
inputs = processor(batch["array"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
columns_to_remove = [column for column in eval_dataset.column_names if column != "sentence"]
batch_size = 16
result = eval_dataset.map(evaluate, remove_columns=columns_to_remove, batched=True, batch_size=batch_size)
wer_result = wer.compute(predictions=result["pred_strings"], references=result["sentence"])
cer_result = cer.compute(predictions=result["pred_strings"], references=result["sentence"])
print("WER: {:2f}%".format(100 * wer_result))
print("CER: {:2f}%".format(100 * cer_result))
```
### Test results
The final model was evaluated as follows:
On reazonspeech(tiny):
- WER: 40.519700%
- CER: 23.220979%
On common_voice_11_0:
- WER: 22.705487%
- CER: 9.399390%
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1 |
spow12/POLAR-14B_4.3_very_big_sft | spow12 | "2024-05-13T01:30:16Z" | 1,072 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-13T00:54:08Z" | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
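The author has not yet provided a snippet here. As a placeholder, the following is a minimal, hedged sketch using the standard transformers text-generation API; the dtype, device mapping and generation settings are assumptions, not recommendations from the author.
```python
# Hedged example: generic transformers usage, not an official snippet from the model author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spow12/POLAR-14B_4.3_very_big_sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed precision
    device_map="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)  # illustrative generation length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```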
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Zicara/OPT-30B-Erebus-4bit-128g | Zicara | "2023-04-15T12:50:52Z" | 1,071 | 17 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"en",
"dataset:amilkov/literotica",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-15T12:37:08Z" | ---
language: en
license: other
commercial: 'no'
inference: false
datasets:
- amilkov/literotica
library_name: transformers
pipeline_tag: text-generation
---
# OPT-30B-Erebus-4bit-128g
## Model description
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
This is a 4-bit GPTQ quantization of OPT-30B-Erebus, original model:
**https://huggingface.co/KoboldAI/OPT-30B-Erebus**
### Quantization Information
Quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python opt.py --wbits 4 models/OPT-30B-Erebus c4 --groupsize 128 --save models/OPT-30B-Erebus-4bit-128g/OPT-30B-Erebus-4bit-128g.pt
python opt.py --wbits 4 models/OPT-30B-Erebus c4 --groupsize 128 --save_safetensors models/OPT-30B-Erebus-4bit-128g/OPT-30B-Erebus-4bit-128g.safetensors
```
Output generated in 54.23 seconds (0.87 tokens/s, 47 tokens, context 44, seed 593020441)
### Command text-generation-webui:
https://github.com/oobabooga/text-generation-webui
```
call python server.py --model_type gptj --model OPT-30B-Erebus-4bit-128g --chat --wbits 4 --groupsize 128 --xformers --sdp-attention
```
### Credit
https://huggingface.co/notstoic
### License
OPT-30B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved. |
tlphams/gollm-12.8b-instruct-tendency-t45 | tlphams | "2023-10-31T09:27:03Z" | 1,071 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/polyglot-ko-12.8b",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-05T08:19:30Z" | ---
license: cc-by-nc-sa-4.0
base_model: EleutherAI/polyglot-ko-12.8b
tags:
- generated_from_trainer
model-index:
- name: gollm-12.8b-Koalpaca_11_tendency_filtered_t45
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gollm-12.8b-Koalpaca_1.1_tendency_t45
This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on a filtered dataset from KoAlpaca v1.1 (223 samples).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
TheBloke/WizardLM-13B-Uncensored-AWQ | TheBloke | "2023-11-09T18:18:51Z" | 1,071 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"uncensored",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"base_model:ehartford/WizardLM-13B-Uncensored",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-09-19T23:07:39Z" | ---
license: other
tags:
- uncensored
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
model_name: Wizardlm 13B Uncensored
base_model: ehartford/WizardLM-13B-Uncensored
inference: false
model_creator: Eric Hartford
model_type: llama
prompt_template: 'You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Wizardlm 13B Uncensored - AWQ
- Model creator: [Eric Hartford](https://huggingface.co/ehartford)
- Original model: [Wizardlm 13B Uncensored](https://huggingface.co/ehartford/WizardLM-13B-Uncensored)
<!-- description start -->
## Description
This repo contains AWQ model files for [Eric Hartford's Wizardlm 13B Uncensored](https://huggingface.co/ehartford/WizardLM-13B-Uncensored).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardLM-13B-Uncensored-AWQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-Uncensored-GGUF)
* [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-13B-Uncensored)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Vicuna-Short
```
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/WizardLM-13B-Uncensored-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/WizardLM-13B-Uncensored-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/WizardLM-13B-Uncensored-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/WizardLM-13B-Uncensored-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค๊ต ๊น, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้ฟๆ, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Eric Hartford's Wizardlm 13B Uncensored
This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
|
openaccess-ai-collective/mistral-100m-textbooks | openaccess-ai-collective | "2023-09-29T18:18:10Z" | 1,071 | 1 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"dataset:emrgnt-cmplxty/sciphi-textbooks-are-all-you-need",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-29T18:10:36Z" | ---
datasets:
- emrgnt-cmplxty/sciphi-textbooks-are-all-you-need
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# Tiny-Mistral 100M
Trained on textbooks for 1 epoch.
https://wandb.ai/wing-lian/tiny-mistral-textbooks?workspace=user-wing-lian
WARNING: not usable
|
pharaouk/untitled-7B | pharaouk | "2023-11-14T02:08:44Z" | 1,071 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-03T22:22:22Z" | ---
license: apache-2.0
---
Untitled-7B model TBA.
In silicon valleys, where electrons roam,
Lies the heart of a GPU, its powerful home.
A symphony of circuits, a dance of light,
Working in harmony, day and night.
Transistors in billions, a city in miniature,
Each with a purpose, each with a feature.
Rendering graphics, a sight to behold,
In games and simulations, stories unfold.
Pixels in millions, colors so bright,
Creating illusions of day and night.
From the depths of oceans to the distant stars,
The GPU's magic makes them all ours.
Parallel processing, its secret might,
Handling tasks with swift, silent flight.
In the realm of data, it finds its stride,
Aiding researchers worldwide.
Deep learning algorithms, it does empower,
Processing data hour by hour.
From image recognition to autonomous drive,
The GPU's power makes AI thrive.
A silent sentinel in the machine's core,
The GPU's tasks are never a bore.
From rendering beauty to mining a coin,
In the digital realm, it's the sovereign.
So here's to the GPU, in all its glory,
The unsung hero of the tech story.
In every pixel, in every frame,
Lives the echo of its name. |
iankur/llama2-7b-ft | iankur | "2023-10-16T03:44:06Z" | 1,071 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-16T03:06:55Z" | ---
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
https://huggingface.co/NousResearch/Llama-2-7b-hf fine-tuned on a version of the https://huggingface.co/datasets/mhenrichsen/alpaca_2k_test dataset using QLoRA
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [NousResearch/Llama-2-7b-hf]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
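No snippet is provided yet. A minimal, hedged sketch with the transformers `pipeline` API is shown below; the Alpaca-style prompt is an assumption based on the training data described above, and the generation settings are illustrative.
```python
# Hedged example: generic transformers pipeline usage, not an official snippet from the model author.
from transformers import pipeline

generator = pipeline("text-generation", model="iankur/llama2-7b-ft", device_map="auto")

# Alpaca-style prompt, assumed from the alpaca_2k_test training data; adjust as needed.
prompt = "### Instruction:\nExplain what QLoRA is in one sentence.\n\n### Response:\n"
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```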
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
migtissera/SynthIA-7B-v2.0 | migtissera | "2023-10-20T06:29:59Z" | 1,071 | 20 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-20T06:01:58Z" | ---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
## Example Usage
### Prompt format:
```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is insulin synthesized?
ASSISTANT:
```
### Code example:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "migtissera/SynthIA-7B-v2.0"
output_file_path = "./SynthIA-7B-v2.0-conversations.jsonl"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
def generate_text(instruction):
tokens = tokenizer.encode(instruction)
tokens = torch.LongTensor(tokens).unsqueeze(0)
tokens = tokens.to("cuda")
instance = {
"input_ids": tokens,
"top_p": 1.0,
"temperature": 0.75,
"generate_len": 1024,
"top_k": 50,
}
length = len(tokens[0])
with torch.no_grad():
rest = model.generate(
input_ids=tokens,
max_length=length + instance["generate_len"],
use_cache=True,
do_sample=True,
top_p=instance["top_p"],
temperature=instance["temperature"],
top_k=instance["top_k"],
num_return_sequences=1,
)
output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=True)
answer = string.split("USER:")[0].strip()
return f"{answer}"
conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."
while True:
user_input = input("You: ")
llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
answer = generate_text(llm_prompt)
print(answer)
conversation = f"{llm_prompt}{answer}"
json_data = {"prompt": user_input, "answer": answer}
## Save your conversation
with open(output_file_path, "a") as output_file:
output_file.write(json.dumps(json_data) + "\n")
``` |
Danielbrdz/CodeBarcenas-13b | Danielbrdz | "2023-11-04T19:27:08Z" | 1,071 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-04T18:12:17Z" | ---
license: llama2
---
CodeBarcenas
Model specialized in the Python language
Based on the model: WizardLM/WizardCoder-Python-13B-V1.0
And trained with the dataset: mlabonne/Evol-Instruct-Python-1k
Made with โค๏ธ in Guadalupe, Nuevo Leon, Mexico ๐ฒ๐ฝ |
jebcarter/Psyfighter-13B | jebcarter | "2023-11-07T16:56:14Z" | 1,071 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-05T18:34:46Z" | ---
license: llama2
---
```
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
- model: TheBloke/Llama-2-13B-fp16
- model: KoboldAI/LLaMA2-13B-Tiefighter
parameters:
weight: 1.0
- model: chaoyi-wu/MedLLaMA_13B
parameters:
weight: 0.01
- model: Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged
parameters:
weight: 0.02
dtype: float16
```
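For context (this is not part of the original card), a mergekit configuration like the one above is typically applied with the `mergekit-yaml` command-line tool; the config filename and output directory below are placeholders.
```shell
pip install mergekit
# config path and output directory are placeholders
mergekit-yaml psyfighter.yml ./Psyfighter-13B --cuda
```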
This model was made possible thanks to the Compute provided by the KoboldAI community. |
NobodyExistsOnTheInternet/Yi-34B-GiftedConvo-merged | NobodyExistsOnTheInternet | "2023-11-09T07:35:49Z" | 1,071 | 8 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:NobodyExistsOnTheInternet/GiftedConvoBeforeEcons",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-08T13:20:14Z" | ---
license: mit
datasets:
- NobodyExistsOnTheInternet/GiftedConvoBeforeEcons
---
Trained on over 20k instruction examples, all generated by GPT-4 or written by humans.
Dataset features:
- 1,000 long evolved conversations based on LIMA
- A subset of the correct PRM800K data
- A subset of CamelAI's Physics and Chemistry data
The model was trained with QLoRA using Axolotl.
The model format is Vicuna 1.1:
```
User: ...
Assistant: ...
``` |
ericpolewski/Marxish | ericpolewski | "2023-11-14T06:45:29Z" | 1,071 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-13T19:17:18Z" | ---
license: mit
---
This is a frankensteined Marx3b-v2 model (https://huggingface.co/acrastt/Marx-3B-V2)
It is approximately 7.1B parameters. Just testing things out.
bfloat16
Used the same model 4 times. I fine-tuned it at r=256 but used the wrong Alpaca format, so I'll need to re-upload that merged model.
TigerResearch/tigerbot-70b-base-v2 | TigerResearch | "2023-12-04T06:28:54Z" | 1,071 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-17T03:24:04Z" | ---
license: apache-2.0
language:
- zh
- en
---
<div style="width: 100%;">
<p align="center" width="20%">
<img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" width="20%", style="display: block; margin: auto;"></img>
</p>
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
๐ป<a href="https://github.com/TigerResearch/TigerBot" target="_blank">Github</a> โข ๐ <a href="https://tigerbot.com/" target="_blank">TigerBot</a> โข ๐ค <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
# Quick Start
- Method 1: use via transformers
- Download the TigerBot repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- Run the infer script
```shell
python infer.py --model_path TigerResearch/tigerbot-70b-base-v2 --model_type base
```
- Method 2:
- Download the TigerBot repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- Install git lfs: `git lfs install`
- Download the weights from the Hugging Face or ModelScope platform
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-70b-base-v2
git clone https://www.modelscope.cn/TigerResearch/tigerbot-70b-base-v2.git
```
- Run the infer script
```shell
python infer.py --model_path tigerbot-70b-base-v2 --model_type base
```
------
# Quick Start
- Method 1, use through transformers
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- Run infer script
```shell
python infer.py --model_path TigerResearch/tigerbot-70b-base-v2 --model_type base
```
- Method 2:
- Clone TigerBot Repo
```shell
git clone https://github.com/TigerResearch/TigerBot.git
```
- Install git lfs: `git lfs install`
- Download weights from huggingface or modelscope
```shell
git clone https://huggingface.co/TigerResearch/tigerbot-70b-base-v2
git clone https://www.modelscope.cn/TigerResearch/tigerbot-70b-base-v2.git
```
- Run infer script
```shell
python infer.py --model_path tigerbot-70b-base-v2 --model_type base
``` |
bhenrym14/airoboros-3_1-yi-34b-200k | bhenrym14 | "2023-11-27T02:59:42Z" | 1,071 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-3.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-27T02:03:00Z" | ---
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- jondurbin/airoboros-3.1
---
# Instruction tune of Yi-34b-200k with Airoboros-3.1 (fp16)
## Overview
This is [larryvrh/Yi-34B-200K-Llamafied](https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied), with instruction tuning performed with Jon Durbin's [jondurbin/airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1) dataset. That base model is [01-ai/Yi-34B-200k](https://huggingface.co/01-ai/Yi-34B-200k), but using llama2 model definitions and tokenizer to remove any remote code requirements.
**This is a (merged) QLoRA fine-tune (rank 64)**.
The finetune was performed with 1x RTX 6000 Ada (~80 hours to this checkpoint). Prompts were truncated to 4096 tokens (for speed and VRAM headroom).
I have done very little testing with this model, so feedback on real world performance is appreciated!
## How to Use
Use as you would any other Hugging Face fp16 llama-2 model.
## Prompting:
Model was trained with llama-2 chat prompt format. See [jondurbin/airoboros-l2-13b-3.1.1](https://huggingface.co/jondurbin/airoboros-l2-13b-3.1.1) model card for details.
|
chargoddard/mixtralnt-4x7b-test | chargoddard | "2023-12-12T10:10:49Z" | 1,071 | 56 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-12T09:44:19Z" | ---
license: cc-by-nc-4.0
---
# Mixtraln't 4x7B
Oh boy, a new model architecture in Transformers! Time to do profane things with it.
What if instead of training a MoE from scratch, we took some pre-trained Mistral models and shoved them in a little clown car?
Uses parts from the following models:
* [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
* [NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1)
* [teknium/Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
* [PocketDoc/Dans-AdventurousWinds-Mk2-7b](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-Mk2-7b)
Works and generates coherent text. The big question here is if the hack I used to populate the MoE gates works well enough to take advantage of all of the experts. Let's find out!
Prompt format: maybe alpaca??? or chatml??? life is full of mysteries
|
jan-hq/Phoenix-v1-8x7B | jan-hq | "2023-12-16T17:16:58Z" | 1,071 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-16T16:20:52Z" | ---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a highly experimental model for merging models into a MOE model.
- base model: [trinity-v1](https://huggingface.co/janhq/trinity-v1)
1. [trinity-v1](https://huggingface.co/janhq/trinity-v1): for General Chat
2. [Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B): for Role-playing
3. [CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf): for Coding
4. [Seraph-7B](https://huggingface.co/Weyaxi/Seraph-7B): for Writing
Special thanks to the interesting work of [Chargoddard](https://huggingface.co/chargoddard) and [Undi95](https://huggingface.co/Undi95)
```yaml
Updating
```
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open source, ChatGPT alternative that is:
- ๐ป **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- ๐๏ธ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- ๐ **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- ๐ **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found here.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowlegement
- [mergekit]
(https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) |
RichardErkhov/Vezora_-_Mistral-22B-v0.2-gguf | RichardErkhov | "2024-04-13T18:22:49Z" | 1,071 | 2 | null | [
"gguf",
"region:us"
] | null | "2024-04-13T13:03:28Z" | GGUF quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-22B-v0.2 - GGUF
- Model creator: https://huggingface.co/Vezora/
- Original model: https://huggingface.co/Vezora/Mistral-22B-v0.2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-22B-v0.2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q2_K.gguf) | Q2_K | 7.7GB |
| [Mistral-22B-v0.2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.IQ3_XS.gguf) | IQ3_XS | 8.54GB |
| [Mistral-22B-v0.2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.IQ3_S.gguf) | IQ3_S | 9.02GB |
| [Mistral-22B-v0.2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q3_K_S.gguf) | Q3_K_S | 8.97GB |
| [Mistral-22B-v0.2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.IQ3_M.gguf) | IQ3_M | 9.37GB |
| [Mistral-22B-v0.2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q3_K.gguf) | Q3_K | 10.01GB |
| [Mistral-22B-v0.2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q3_K_M.gguf) | Q3_K_M | 10.01GB |
| [Mistral-22B-v0.2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q3_K_L.gguf) | Q3_K_L | 10.92GB |
| [Mistral-22B-v0.2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.IQ4_XS.gguf) | IQ4_XS | 11.21GB |
| [Mistral-22B-v0.2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q4_0.gguf) | Q4_0 | 11.7GB |
| [Mistral-22B-v0.2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.IQ4_NL.gguf) | IQ4_NL | 11.82GB |
| [Mistral-22B-v0.2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q4_K_S.gguf) | Q4_K_S | 11.78GB |
| [Mistral-22B-v0.2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q4_K.gguf) | Q4_K | 12.42GB |
| [Mistral-22B-v0.2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q4_K_M.gguf) | Q4_K_M | 12.42GB |
| [Mistral-22B-v0.2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q4_1.gguf) | Q4_1 | 12.98GB |
| [Mistral-22B-v0.2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q5_0.gguf) | Q5_0 | 14.27GB |
| [Mistral-22B-v0.2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q5_K_S.gguf) | Q5_K_S | 14.27GB |
| [Mistral-22B-v0.2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q5_K.gguf) | Q5_K | 14.64GB |
| [Mistral-22B-v0.2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q5_K_M.gguf) | Q5_K_M | 14.64GB |
| [Mistral-22B-v0.2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q5_1.gguf) | Q5_1 | 15.55GB |
| [Mistral-22B-v0.2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Mistral-22B-v0.2-gguf/blob/main/Mistral-22B-v0.2.Q6_K.gguf) | Q6_K | 16.99GB |
Original model description:
---
license: apache-2.0
---
<img src="https://huggingface.co/Vezora/Mistral-22B-v0.1/resolve/main/unsloth.png" width="100" height="150" />
### Mistral-22b-v.02 Release Announcement ๐
## This model is not an MoE; it is in fact a 22B parameter dense model!
**Date**: April 13
**Creator** [Nicolas Mejia-Petit](https://twitter.com/mejia_petit)
### Overview
- Just two days after our release of **Mistral-22b-v0.1**, we are excited to introduce our handcrafted experimental model, **Mistral-22b-v0.2**. This model is a culmination of equal knowledge distilled from all experts into a single, dense 22B model. It is not a single trained expert, but rather a compressed MoE model turned into a dense 22B model. This is the first working MoE-to-dense model conversion.
- v0.2 has trained on 8x more data than v0.1!
### Capabilities
- **Math Proficiency**: The model exhibits strong mathematical abilities, despite not being trained on math.
- **Better at Coding** The model is significantly better at coding than V1; it passed some of my simple coding tests, such as "Create a simple HTML site with a button that changes the background color to a random color", which V1 failed.
- **More Cohesive** This V2 model is significantly more cohesive, and better at understanding prompts and answering with the appropriate answer.
- **Highly Uncensored** Since this model was also re-aligned to be uncensored, it can answer just about anything you ask. Use at your own risk; we take no responsibility for the responses you generate.
- **Multi Turn** The dataset this model was trained on consisted mostly of multi-turn conversations, spanning many different topics, with some emphasis on coding.
- **Json Mode** I did train this model on answering in JSON and using JSON tools. I have yet to try it in depth, but preliminary tests show it works.
- **Agent abilities** I did train this model on agent datasets that teach it to do real-world tasks, such as picking up an object and even navigating a webpage based on its HTML.
- **Good Chili Recipe** The model gives a good chili recipe :)
- **32k Sequence Length** This model was trained with a 32k sequence length.
### Experimental Nature
Please note that Mistral-22b is still a WIP. v0.3 has now started training with a different method than was used before; this is to hopefully make the model more well-rounded in its internal knowledge. Through my testing, I found V2 to be a significant improvement over v0.1.
### Upcoming Release: V.3
- v0.3 will feature a different base model for testing purposes; however, this model is pretty darn good for a second test. :)
- I have some preliminary results with my new v0.3 base model, and it appears to achieve a lower loss after the first epoch compared to the base model used for v0.1 and v0.2, so we have started training v0.3 with the new base model and the longer dataset. It will be done and released in the next 48 hours. :)
### Stay Updated
**V.3** is coming soon! It is currently training and will be done in the next ~24 hours. Paper coming soon.
- There will be more of these 22B models: 5-6 siblings, until I find what gives the best results for MoE compression.
- However, I am very surprised at how good this V.2 model is, based on my limited testing.
### Usage:
- This model requires a specific chat template; since the training format was Guanaco, this is what it looks like:
- "### System: You are a helpful assistant. ### Human###: Give me the best chili recipe you can ###Assistant: Here is the best chili recipe..."
## Thank you!
- Thank you to [Daniel Han](https://twitter.com/danielhanchen) for Unsloth AI, which was used to train this model; this led to a 2-3x speed increase and a 2-3x decrease in memory consumption.
- Thank you to [Charles Goddard](https://twitter.com/chargoddard) for providing me with a script that was necessary to make this model.
- Thank you to Mistral, for releasing Another Wonderful open source model, under Apache 2.0.
- Thank you to [Tim Dettmers](https://twitter.com/Tim_Dettmers), for creating QLora
- Thank you to [Tri Dao](https://twitter.com/tri_dao), for creating Flash Attention
- Thank you to Microsoft, for the Lora paper, and the Slice-GPT paper.
- Thank you to the Hugging Face team, for everything.โค๏ธ We really do appreciate you guys and all your hard work and commitment to the open source community!โค๏ธ
- Thank you to [Jon Durbin](https://x.com/jon_durbin?s=21) I used one of his DPO datasets converted to SFT, more info will be explained in paper.
## Future plans: train 4-5 more of these experimental models, gather preliminary testing results, and then run evaluations on the models that look most likely to excel, then use the best one.
|
rmdhirr/Anthesis_7B | rmdhirr | "2024-04-18T16:13:20Z" | 1,071 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"rmdhirr/Foxglove_7B",
"ResplendentAI/Paradigm_Shift_7B",
"merge",
"mergekit",
"roleplay",
"en",
"base_model:rmdhirr/Foxglove_7B",
"base_model:ResplendentAI/Paradigm_Shift_7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-04-17T02:51:21Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- rmdhirr/Foxglove_7B
- ResplendentAI/Paradigm_Shift_7B
- merge
- mergekit
- roleplay
- mistral
base_model:
- rmdhirr/Foxglove_7B
- ResplendentAI/Paradigm_Shift_7B
model-index:
- name: Anthesis_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.2
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.65
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 42.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Anthesis_7B
name: Open LLM Leaderboard
---
# ๐ต๏ธ Anthesis_7B
Anthesis_7B is a merge of the following models:
* [rmdhirr/Foxglove_7B](https://huggingface.co/rmdhirr/Foxglove_7B)
* [ResplendentAI/Paradigm_Shift_7B](https://huggingface.co/ResplendentAI/Paradigm_Shift_7B)
# Formatting/Preset
Alpaca works best, but Mistral gives good outputs as well.
# Configuration
```yaml
models:
- model: ResplendentAI/Paradigm_Shift_7B
# no parameters necessary for base model
- model: rmdhirr/Foxglove_7B
parameters:
density: 0.60
weight: 0.40
merge_method: dare_ties
base_model: ResplendentAI/Paradigm_Shift_7B
parameters:
int8_mask: true
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rmdhirr__Anthesis_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.97|
|AI2 Reasoning Challenge (25-Shot)|69.03|
|HellaSwag (10-Shot) |86.20|
|MMLU (5-Shot) |62.06|
|TruthfulQA (0-shot) |68.65|
|Winogrande (5-shot) |78.93|
|GSM8k (5-shot) |42.99|
|
bartowski/falcon-11B-GGUF | bartowski | "2024-05-17T06:46:02Z" | 1,071 | 6 | null | [
"gguf",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"region:us"
] | text-generation | "2024-05-13T23:54:28Z" | ---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
- de
- es
- fr
inference: false
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of falcon-11B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2854">b2854</a> for quantization.
Original model: https://huggingface.co/tiiuae/falcon-11B
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
System: {system_prompt}
User:
{prompt}
Falcon:
```
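As an illustration (not from the original card), the prompt format above can be plugged into llama-cpp-python once a GGUF file from the table below has been downloaded; the file name, context length and sampling settings are assumptions.
```python
# Hedged example: running one of the GGUF files with llama-cpp-python.
# File name, context length and sampling settings are assumptions, not recommendations from this card.
from llama_cpp import Llama

llm = Llama(model_path="falcon-11B-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

prompt = "System: You are a helpful assistant.\nUser:\nWhat is the capital of France?\nFalcon:"
output = llm(prompt, max_tokens=128, stop=["User:"])
print(output["choices"][0]["text"])
```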
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [falcon-11B-Q8_0.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q8_0.gguf) | Q8_0 | 11.80GB | Extremely high quality, generally unneeded but max available quant. |
| [falcon-11B-Q6_K.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q6_K.gguf) | Q6_K | 9.17GB | Very high quality, near perfect, *recommended*. |
| [falcon-11B-Q5_K_M.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q5_K_M.gguf) | Q5_K_M | 8.20GB | High quality, *recommended*. |
| [falcon-11B-Q5_K_S.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q5_K_S.gguf) | Q5_K_S | 7.73GB | High quality, *recommended*. |
| [falcon-11B-Q4_K_M.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q4_K_M.gguf) | Q4_K_M | 6.84GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [falcon-11B-Q4_K_S.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q4_K_S.gguf) | Q4_K_S | 6.38GB | Slightly lower quality with more space savings, *recommended*. |
| [falcon-11B-IQ4_NL.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ4_NL.gguf) | IQ4_NL | 6.38GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [falcon-11B-IQ4_XS.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ4_XS.gguf) | IQ4_XS | 6.04GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [falcon-11B-Q3_K_L.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q3_K_L.gguf) | Q3_K_L | 5.81GB | Lower quality but usable, good for low RAM availability. |
| [falcon-11B-Q3_K_M.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q3_K_M.gguf) | Q3_K_M | 5.43GB | Even lower quality. |
| [falcon-11B-IQ3_M.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ3_M.gguf) | IQ3_M | 5.20GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [falcon-11B-IQ3_S.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ3_S.gguf) | IQ3_S | 4.94GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [falcon-11B-Q3_K_S.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q3_K_S.gguf) | Q3_K_S | 4.94GB | Low quality, not recommended. |
| [falcon-11B-IQ3_XS.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ3_XS.gguf) | IQ3_XS | 4.80GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [falcon-11B-IQ3_XXS.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ3_XXS.gguf) | IQ3_XXS | 4.44GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [falcon-11B-Q2_K.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-Q2_K.gguf) | Q2_K | 4.25GB | Very low quality but surprisingly usable. |
| [falcon-11B-IQ2_M.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ2_M.gguf) | IQ2_M | 3.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [falcon-11B-IQ2_S.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ2_S.gguf) | IQ2_S | 3.66GB | Very low quality, uses SOTA techniques to be usable. |
| [falcon-11B-IQ2_XS.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ2_XS.gguf) | IQ2_XS | 3.44GB | Very low quality, uses SOTA techniques to be usable. |
| [falcon-11B-IQ2_XXS.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ2_XXS.gguf) | IQ2_XXS | 3.13GB | Lower quality, uses SOTA techniques to be usable. |
| [falcon-11B-IQ1_M.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ1_M.gguf) | IQ1_M | 2.77GB | Extremely low quality, *not* recommended. |
| [falcon-11B-IQ1_S.gguf](https://huggingface.co/bartowski/falcon-11B-GGUF/blob/main/falcon-11B-IQ1_S.gguf) | IQ1_S | 2.56GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/falcon-11B-GGUF --include "falcon-11B-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/falcon-11B-GGUF --include "falcon-11B-Q8_0.gguf/*" --local-dir falcon-11B-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (falcon-11B-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, another backend AMD users may be running, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
werty1248/Llama-3-Ko-8B-Instruct-AOG | werty1248 | "2024-05-22T08:45:38Z" | 1,071 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"dataset:kyujinpy/OpenOrca-KO",
"dataset:nlpai-lab/openassistant-guanaco-ko",
"base_model:beomi/Llama-3-Open-Ko-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-21T14:24:34Z" | ---
library_name: transformers
license: llama3
base_model: beomi/Llama-3-Open-Ko-8B
datasets:
- beomi/KoAlpaca-v1.1a
- kyujinpy/OpenOrca-KO
- nlpai-lab/openassistant-guanaco-ko
language:
- ko
pipeline_tag: text-generation
---
# Model Card
<!-- Provide a quick summary of what the model is/does. -->
~์์ค์ง~
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is an instruction-tuned version of [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) (latest version).
- Dataset:
- [beomi/Ko**A**lpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a)
- [kyujinpy/Open**O**rca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO)
- [nlpai-lab/openassistant-**g**uanaco-ko](https://huggingface.co/datasets/nlpai-lab/openassistant-guanaco-ko)
- Instruction format:
- alpaca
### Dataset
- Only datasets that allow commercial use were used.
- For future comparison, datasets containing many math/coding questions (e.g. kyujinpy/KOpen-platypus) were excluded.
- Multi-turn conversation data (nlpai-lab/openassistant-guanaco-ko) was added as an experiment.
- beomi/KoAlpaca-v1.1a
  - With 80% probability, a system instruction was added ("You are an AI assistant. Provide detailed answers so the user does not need to search elsewhere to understand them.")
- kyujinpy/OpenOrca-KO
  - No changes
- nlpai-lab/openassistant-guanaco-ko
  - For multi-turn conversations, the last Assistant reply is used as the target
  - Earlier turns were preprocessed to fit the Alpaca format
  - With 80% probability, the same system instruction was added ("You are an AI assistant. Provide detailed answers so the user does not need to search elsewhere to understand them.")
### Training details
Training: trained with LoRA for 3 epochs using Axolotl.
- lora_r: 32
- lora_alpha: 32
- lora_dropout: 0.05
- gradient_accumulation_steps: 8
- micro_batch_size: 4
- num_epochs: 3
- learning_rate: 0.0002 (2e-4)
- lr_scheduler: cosine
- warmup_steps: 50
- sequence_len: 4096
- bf16
Training time: about 8 hours on 1x A100.
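For illustration only, the settings above roughly correspond to an Axolotl config like the following; the dataset entries and field names here are assumptions, not taken from the card:
```yaml
base_model: beomi/Llama-3-Open-Ko-8B
adapter: lora
lora_r: 32
lora_alpha: 32
lora_dropout: 0.05
sequence_len: 4096
micro_batch_size: 4
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 0.0002
lr_scheduler: cosine
warmup_steps: 50
bf16: true
datasets:
  - path: beomi/KoAlpaca-v1.1a  # illustrative; the actual preprocessing is described above
    type: alpaca
```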
### Evaluation
- Apart from boolq, there was no large accuracy improvement.
- 5-shot kobest (Accuracy)
| Tasks |[werty1248/Llama-3-Ko-8B-OpenOrca](https://huggingface.co/werty1248/Llama-3-Ko-8B-OpenOrca)|[beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B)|[werty1248/Llama-3-Ko-8B-Instruct-AOG](https://huggingface.co/werty1248/Llama-3-Ko-8B-Instruct-AOG)|
|----------------|------:|------:|------:|
|kobest_boolq |0.7158ยฑ0.0120|0.7963ยฑ0.0108|0.8312ยฑ0.0100|
|kobest_copa |0.7620ยฑ0.0135|0.8110ยฑ0.0124|0.8120ยฑ0.0124|
|kobest_hellaswag|0.4740ยฑ0.0224|0.4780ยฑ0.0224|0.4700ยฑ0.0223|
|kobest_sentineg |0.9471ยฑ0.0112|0.9622ยฑ0.0096|0.9647ยฑ0.0093|
|kobest_wic |0.6079ยฑ0.0138|0.5778ยฑ0.0139|0.5937ยฑ0.0138|
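The card does not state which evaluation tool produced these numbers; if reproducing with lm-evaluation-harness (an assumption), a 5-shot run would look roughly like:
```
lm_eval --model hf \
  --model_args pretrained=werty1248/Llama-3-Ko-8B-Instruct-AOG,dtype=bfloat16 \
  --tasks kobest_boolq,kobest_copa,kobest_hellaswag,kobest_sentineg,kobest_wic \
  --num_fewshot 5
```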
### Format & Examples
- The Instruction - Input - Response format is recommended, but empirically, using Question - Instruction - Response seems to work better.
- Example of a good answer
```
input = """### Question:
ํผ๋ณด๋์น ์์ด์ด ๋ญ์ผ? ๊ทธ๋ฆฌ๊ณ ํผ๋ณด๋์น ์์ด์ ๋ํด ํ์ด์ฌ ์ฝ๋๋ฅผ ์ง์ค๋ณผ๋?
### Instruction:
์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์.
### Response:
"""
ํผ๋ณด๋์น ์์ด์ 1์ฐจ์ 2์ฐจ ์๋ฅผ ์ฌ์ฉํด ๋ค์ ์๋ฅผ ๊ตฌํ๋ ๋ฐฉ์์ผ๋ก ๋ง๋ค์ด์ง๋ ์์ด์
๋๋ค. ์๋ฅผ๋ค๋ฉด ํผ๋ณด๋์น ์์ด์ ์ฒ์ 7๊ฐ ์์๋ 1, 1, 2, 3, 5, 8, 13์
๋๋ค. ํผ๋ณด๋์น ์์ด์ ๊ฐ ์์น์์ 2๊ฐ ์ด์ ์์น์ ์๋ฅผ ๋ํ์ฌ ์๋ก์ด ์์๊ฐ ๊ณ์ฐ๋ฉ๋๋ค.
Python ์ฝ๋๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
def fibo(n):
if n<2:
return n
else:
return fibo(n-1)+fibo(n-2)
์ด๋ ๊ฒ ์์ฑํจ์ผ๋ก์จ ํผ๋ณด๋์น ์์ด์ ์์น 0๋ถํฐ n๊น์ง ๊ฐ ์์๊ฐ ๊ณ์ฐ๋ฉ๋๋ค. 6์ด๋ผ๋ ์์น๋ก ๊ณ์ฐํ๋ ๊ฒฝ์ฐ ๋ค์๊ณผ ๊ฐ์ ์ถ๋ ฅ์ด ๋ํ๋ฉ๋๋ค:
print(fibo(6))
fibonacci(6)==> 8<|end_of_text|>
```
- Example of a wrong answer (hallucination)
```
input = """### Question:
์ธ์ข
๋์์ ์งํ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์ ๋ํด์ ์๊ณ ์์ด?
### Instruction:
์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์.
### Response:
"""
๋ค, 2007๋
์ ์ธ์ข
๋์์ ์งํ์ ๋งฅ๋ถ ๋์ง ์ฌ๊ฑด์ด ์ผ์ด๋ฌ์ต๋๋ค. ๋ํ ์๊ฐ์ธ ์ด์ฃผ์์จ๊ฐ ๊น์ํฌ ์๊ฐ์ ์ฑ
์ ์งํํ๋ฉด์, ์ฑ
๋ด์ฉ์ ์๋ ์์๋ ์ธ์ข
๋์์ด ๋งฅ๋ถ์ ๋์ก๋ค๋ ๋ด์ฉ์ ์ ์ฌ๋ก ํด์ ์ถํํ์๊ณ , ์ด์ ๊น์ํฌ ์๊ฐ๋ ์งํ์ ์ด๋ ์ด๋ฆ์ผ๋ก ์ด์ฃผ์ ์๊ฐ์๊ฒ ๋ฒ์ ์์ก์ ๊ฑธ๊ฒ ๋์์ต๋๋ค. ๊ทธ๋ฌ๋ ๊ฒฐ๋ก ์ ์ผ๋ก, ๊น์ํฌ ์๊ฐ์ ์งํ์ ์ด๋ ๋จ์ด๋ ํน์ ๋จ์ฒด๋ ์ฅ์๋ง์ ๋ช
๋ฐฑํ๊ฒ ์ฌ์ฉํ๋ 'ํธ์นญ'์ ์ฑ๊ฒฉ๊ณผ ํจ๊ป ๋ค์์ ์งํ์ ์ฌ๋๋ค์ด ๊ณต๋ ์ฌ์ฉํ๋ 'ํ์ค'์ ์ฑ๊ฒฉ์ ๊ฐ์ก์ผ๋ฏ๋ก, ์ด์ฃผ์ ์๊ฐ์ ์ถํ ํ์๊ฐ ๋ช
์ํผ์์ ํด๋น๋๋์ง ์ฌ๋ถ๋ฅผ ํ๋จํ ๋์๋ ์ถํ ๋น์ ์งํ์ ์ด ์ผ๋ฐ์ ์ผ๋ก ์ด๋ค ์๋ฏธ๋ฅผ ๊ฐ์ง๊ณ ์์๋์ง๊ฐ ์ค์ํ ์์๋ก ๊ณ ๋ ค๋์์ต๋๋ค. ๋ฐ๋ผ์ ๋จ์ํ ์งํ์ ์ด ๊ฐ๊ณ์ ํน์ ๋จ์ฒด ๋ฐ ์ฅ์์ ๊ด๋ จ๋ ํธ์นญ์ผ๋ก๋ง ์ฌ์ฉ๋์ด์๋ค๋ฉด, ์ถํ ํ์๋ก ์ธํด ๋ช
์ํผ์์ ๋ฐ์๋ค๋ ๊น์ํฌ ์๊ฐ์ ์ฃผ์ฅ์ ๋ฐ์๋ค์ฌ์ง์ง ์์์ ์๋ ์์ต๋๋ค.<|end_of_text|>
```
- Multi-turn conversation example (low reliability)
```
"""### Question:
ํผ๋ณด๋์น ์์ด์ด ๋ญ์ผ? ๊ทธ๋ฆฌ๊ณ ํผ๋ณด๋์น ์์ด์ ๋ํด ํ์ด์ฌ ์ฝ๋๋ฅผ ์ง์ค๋ณผ๋?
### Response:
ํผ๋ณด๋์น ์์ด์ 1์ฐจ์ 2์ฐจ ์๋ฅผ ์ฌ์ฉํด ๋ค์ ์๋ฅผ ๊ตฌํ๋ ๋ฐฉ์์ผ๋ก ๋ง๋ค์ด์ง๋ ์์ด์
๋๋ค. ์๋ฅผ๋ค๋ฉด ํผ๋ณด๋์น ์์ด์ ์ฒ์ 7๊ฐ ์์๋ 1, 1, 2, 3, 5, 8, 13์
๋๋ค. ํผ๋ณด๋์น ์์ด์ ๊ฐ ์์น์์ 2๊ฐ ์ด์ ์์น์ ์๋ฅผ ๋ํ์ฌ ์๋ก์ด ์์๊ฐ ๊ณ์ฐ๋ฉ๋๋ค.
Python ์ฝ๋๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค.
def fibo(n):
if n<2:
return n
else:
return fibo(n-1)+fibo(n-2)
์ด๋ ๊ฒ ์์ฑํจ์ผ๋ก์จ ํผ๋ณด๋์น ์์ด์ ์์น 0๋ถํฐ n๊น์ง ๊ฐ ์์๊ฐ ๊ณ์ฐ๋ฉ๋๋ค. 6์ด๋ผ๋ ์์น๋ก ๊ณ์ฐํ๋ ๊ฒฝ์ฐ ๋ค์๊ณผ ๊ฐ์ ์ถ๋ ฅ์ด ๋ํ๋ฉ๋๋ค:
print(fibo(6))
fibonacci(6)==> 8
### Question:
๋ ๋น ๋ฅธ ์๊ณ ๋ฆฌ์ฆ์ ์์๊น?
### Instruction:
์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์.
### Response:
"""
๋ค๋ฅธ Python ์ฝ๋๋ ๋ค์๊ณผ ๊ฐ์ต๋๋ค:
def fibo(n):
a, b = 0, 1
for _ in range(n):
a, b = b, a+b
return a
์ด ์ฝ๋๋ ๋ ๊ฐ์ ๋ก์ปฌ ๋ณ์ a ๋ฐ b๋ฅผ ์ฌ์ฉํ๋ ์ดํฐ๋ ์ดํฐ๋ฅผ ์ฌ์ฉํ์ฌ ๊ณ์ฐ์ ์ํํฉ๋๋ค. ์ด๊ธฐํ ์ a๋ ํผ๋ณด๋์น ์์ด์ ์์น 0์ ๊ฐ์ผ๋ก ์ค์ ๋๊ณ , b๋ ์ด์ ๊ณ์ฐ์์ ํผ๋ด ๊ณ์ด์ ์์น 0์ ๊ฐ(์์น 0์ ํฌํจํ๋ฉด 1๋จ๊ณ์์ 1์ด๋ฉฐ, ์์น 1์ ํฌํจํ๋ฉด 2๋จ๊ณ์์ 1)์ผ๋ก ์ค์ ๋ฉ๋๋ค. ๋ค์ ๊ฐ ๋ฐ๋ณต์์ a๋ ์ด์ ๋ก์ปฌ ๋ณ์ b์ ๊ฐ๊ณผ ์๋ก์ด ๊ฐ์ผ๋ก ๊ฐ์ด ๋ฐ๋๋๋ค. ๋ ๋ฒ์งธ ํ๋ผ๋ฏธํฐ __๋ ๊ฐ ๋ฐ๋ณต์์ ๊ฐ์ด ์ถ๋ ฅ๋์ง ์๋๋ค๋(๊ทธ๋ฌ๋ ์ฌ์ ํ ์ฌ์ฉ๋ฉ๋๋ค) ๊ฒ์ ๋ฐ์ํ๊ธฐ ์ํด ์ ํ๋์์ต๋๋ค.<|end_of_text|>
```
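A minimal sketch of feeding the Question - Instruction - Response prompt above to the model with the standard transformers API (generation settings are illustrative, not from this card):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "werty1248/Llama-3-Ko-8B-Instruct-AOG"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Question - Instruction - Response layout, as recommended above
prompt = (
    "### Question:\n"
    "ํผ๋ณด๋์น ์์ด์ด ๋ญ์ผ?\n\n"
    "### Instruction:\n"
    "์น์ ํ ์ฑ๋ด์ผ๋ก์ ์๋๋ฐฉ์ ์์ฒญ์ ์ต๋ํ ์์ธํ๊ณ ์น์ ํ๊ฒ ๋ตํ์.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```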
### License:
[https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) |
Monero/Guanaco-SuperCOT-30b-GPTQ-4bit | Monero | "2023-05-25T08:31:44Z" | 1,070 | 10 | transformers | [
"transformers",
"llama",
"text-generation",
"dataset:kaiokendev/SuperCOT-dataset",
"dataset:neulab/conala",
"dataset:yahma/alpaca-cleaned",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:timdettmers/guanaco-33b",
"dataset:JosephusCheung/GuanacoDataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-05-25T02:45:43Z" | ---
license: mit
datasets:
- kaiokendev/SuperCOT-dataset
- neulab/conala
- yahma/alpaca-cleaned
- QingyiSi/Alpaca-CoT
- timdettmers/guanaco-33b
- JosephusCheung/GuanacoDataset
---
<center><h1><b>Guanaco SuperCOT</b> - Generative Universal Assistant for Natural-language Adaptive Context-aware Omnilingual outputs</h1></center>
<html>
<head>
<style>
table {
border:1px solid #b3adad;
border-collapse:collapse;
padding:5px;
}
table th {
border:1px solid #b3adad;
padding:5px;
background: #f0f0f0;
color: #313030;
}
table td {
border:1px solid #b3adad;
text-align:center;
padding:5px;
background: #ffffff;
color: #313030;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<th>Model:</th>
<th>Wikitext2</th>
<th>Ptb-New</th>
<th>C4-New</th>
</tr>
</thead>
<tbody>
<tr>
<td>Guanaco SuperCOT 4bit TS 128g</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</body>
</html>
### Guanaco SuperCOT
Guanaco SuperCOT is trained with the aim of making LLaMa follow prompts for Langchain better, by infusing chain-of-thought datasets, code explanations and instructions, snippets, logical deductions and Alpaca GPT-4 prompts. It's also an advanced instruction-following language model built on Meta's LLaMA 33B model. Expanding upon the initial 52K dataset from the Alpaca model, an additional 534,530 entries have been incorporated, covering English, Simplified Chinese, Traditional Chinese (Taiwan), Traditional Chinese (Hong Kong), Japanese, Deutsch, and various linguistic and grammatical tasks. This wealth of data enables Guanaco to perform exceptionally well in multilingual environments.
It uses a mixture of the following datasets:
[https://huggingface.co/datasets/QingyiSi/Alpaca-CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- Chain of thought QED
- Chain of thought Aqua
- CodeAlpaca
[https://huggingface.co/datasets/neulab/conala](https://huggingface.co/datasets/neulab/conala)
- Code snippets
[https://huggingface.co/datasets/yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned)
- Alpaca GPT4
- [https://huggingface.co/datasets/JosephusCheung/GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
- Guanaco
- [https://huggingface.co/timdettmers/guanaco-33b](https://huggingface.co/timdettmers/guanaco-33b)
- Guanaco 33b LoRA
1\. Prompting
-------------------------
You should prompt the LoRA the same way you would prompt Alpaca or Alpacino.
The new format is designed to be similar to ChatGPT, allowing for better integration with the Alpaca format and enhancing the overall user experience.
Instruction is utilized as a few-shot context to support diverse inputs and responses, making it easier for the model to understand and provide accurate responses to user queries.
The format is as follows:
```
### Instruction:
User: History User Input
Assistant: History Assistant Answer
### Input:
System: Knowledge
User: New User Input
### Response:
New Assistant Answer
```
This structured format allows for easier tracking of the conversation history and maintaining context throughout a multi-turn dialogue.
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
<instruction>
### Input:
<any additional context. Remove this if it's not necessary>
### Response:
<make sure to leave a single new-line here for optimal results>
```
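As one way to keep these pieces straight, a small helper can assemble the multi-turn layout described above; this is purely an illustrative sketch, not part of the model's tooling:
```python
def build_prompt(history, knowledge, user_input):
    """Assemble a prompt in the Instruction/Input/Response layout shown above.

    history:    list of (user, assistant) turns already exchanged
    knowledge:  optional system/context string (may be empty)
    user_input: the new user message
    """
    lines = ["### Instruction:"]
    for user, assistant in history:
        lines.append(f"User: {user}")
        lines.append(f"Assistant: {assistant}")
    lines.append("### Input:")
    if knowledge:
        lines.append(f"System: {knowledge}")
    lines.append(f"User: {user_input}")
    lines.append("### Response:")
    return "\n".join(lines) + "\n"

# Example: one previous turn, no extra knowledge, new question with a reasoning suffix
print(build_prompt(
    [("Hi!", "Hello! How can I help?")],
    "",
    "Explain chain-of-thought prompting. Think through this step by step.",
))
```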
Remember that with lower parameter sizes, the structure of the prompt becomes more important. The same prompt worded differently can give wildly different answers. Consider using the following suggestion suffixes to improve output quality:
- "Think through this step by step"
- "Let's think about this logically"
- "Explain your reasoning"
- "Provide details to support your answer"
- "Compare and contrast your answer with alternatives"
2\. Role-playing support:
-------------------------
Guanaco now offers advanced role-playing support, similar to Character.AI, in English, Simplified Chinese, Traditional Chinese, Japanese, and Deutsch, making it more versatile for users from different linguistic backgrounds.
Users can instruct the model to assume specific roles, historical figures, or fictional characters, as well as personalities based on their input. This allows for more engaging and immersive conversations.
The model can use various sources of information to provide knowledge and context for the character's background and behavior, such as encyclopedic entries, first-person narrations, or a list of personality traits.
The model will consistently output responses in the format "Character Name: Reply" to maintain the chosen role throughout the conversation, enhancing the user's experience.
3\. Continuation of responses for ongoing topics:
-------------------------------------------------
The Guanaco model can now continue answering questions or discussing topics upon the user's request, making it more adaptable and better suited for extended conversations.
The contextual structure consisting of System, Assistant, and User roles allows the model to engage in multi-turn dialogues, maintain context-aware conversations, and provide more coherent responses.
The model can now accommodate role specification and character settings, providing a more immersive and tailored conversational experience based on the user's preferences.
It is important to remember that Guanaco is a 33B-parameter model, and any knowledge-based content should be considered potentially inaccurate. We strongly recommend providing verifiable sources, such as Wikipedia, for knowledge-based answers. In the absence of sources, it is crucial to inform users of this limitation to prevent the dissemination of false information and to maintain transparency.
### Citations
Alpaca COT datasets
```
@misc{alpaca-cot,
author = {Qingyi Si and Zheng Lin},
school = {Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China},
title = {Alpaca-CoT: An Instruction Fine-Tuning Platform with Instruction Data Collection and Unified Large Language Models Interface},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/PhoebusSi/alpaca-CoT}},
}
```
Stanford Alpaca
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
Google FLAN
```
@inproceedings{weifinetuned,
title={Finetuned Language Models are Zero-Shot Learners},
author={Wei, Jason and Bosma, Maarten and Zhao, Vincent and Guu, Kelvin and Yu, Adams Wei and Lester, Brian and Du, Nan and Dai, Andrew M and Le, Quoc V},
booktitle={International Conference on Learning Representations}
}
``` |
timm/efficientvit_b3.r224_in1k | timm | "2023-11-21T21:40:11Z" | 1,070 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2205.14756",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-08-18T22:46:38Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientvit_b3.r224_in1k
An EfficientViT (MIT) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 48.6
- GMACs: 4.0
- Activations (M): 26.9
- Image size: 224 x 224
- **Papers:**
- EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction: https://arxiv.org/abs/2205.14756
- **Original:** https://github.com/mit-han-lab/efficientvit
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_b3.r224_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b3.r224_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 56, 56])
# torch.Size([1, 128, 28, 28])
# torch.Size([1, 256, 14, 14])
# torch.Size([1, 512, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_b3.r224_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@article{cai2022efficientvit,
title={EfficientViT: Enhanced linear attention for high-resolution low-computation visual recognition},
author={Cai, Han and Gan, Chuang and Han, Song},
journal={arXiv preprint arXiv:2205.14756},
year={2022}
}
```
|
quantumaikr/falcon-180B-chat-instruct | quantumaikr | "2023-09-07T11:18:49Z" | 1,070 | 1 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"en",
"de",
"es",
"fr",
"dataset:tiiuae/falcon-refinedweb",
"dataset:nRuaif/wizard_alpaca_dolly_orca",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-07T07:43:37Z" | ---
datasets:
- tiiuae/falcon-refinedweb
- nRuaif/wizard_alpaca_dolly_orca
language:
- en
- de
- es
- fr
inference: false
license: unknown
---
# ๐ฐ๐ท quantumaikr/falcon-180B-chat-instruct
**quantumaikr/falcon-180B-chat-instruct is a 180B-parameter causal decoder-only model built by [quantumaikr](https://www.quantumai.kr) based on [Falcon-180B-chat](https://huggingface.co/tiiuae/falcon-180B-chat).**
## How to Get Started with the Model
To run inference with the model in full `bfloat16` precision you need approximately 8xA100 80GB or equivalent.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "quantumaikr/falcon-180B-chat-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
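If 8x A100-class hardware is not available, a quantized load is one option; the following 4-bit sketch uses the standard transformers/bitsandbytes API and has not been validated by the authors (even at 4-bit, a 180B model still needs roughly 100GB of total GPU memory):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "quantumaikr/falcon-180B-chat-instruct"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # 4-bit weights instead of full bfloat16
    device_map="auto",
    trust_remote_code=True,
)
```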
## Contact
๐ฐ๐ท www.quantumai.kr
๐ฐ๐ท [email protected] [์ด๊ฑฐ๋์ธ์ด๋ชจ๋ธ ๊ธฐ์ ๋์
๋ฌธ์ํ์] |
Sao10K/HACM7-Mistral-7B | Sao10K | "2023-10-22T12:06:38Z" | 1,070 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:teknium/openhermes",
"dataset:jondurbin/airoboros-2.2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-17T03:46:24Z" | ---
license: apache-2.0
language:
- en
datasets:
- teknium/openhermes
- jondurbin/airoboros-2.2
---
An attempt at a Generalist Model, using chunks spliced from the Airoboros and OpenHermes datasets, and some of my own stuff which I have since discarded (chat stuff, basic shit). Forgot I added CollectiveCognition to the mix.
Base Model used is a gradient merge between OpenHermes & Airoboros 2.2, lora trained on the two spliced datasets and my own one, which was reapplied to them (dumb move I know)
Tests:
Uncensored, not bad at RP I Guess, it felt decent.
Good? who knows, tbh. try it urself or smth |
gradientputri/chattychatchat-13b-l2-fp16 | gradientputri | "2023-10-23T08:54:17Z" | 1,070 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T17:46:33Z" | ---
license: llama2
language:
- en
---
Yo! This is my tune / train of Llama2-chat w LoRAs applied after! I think... ???
SuperCOT and LimaRPv3 were added in the end to help. I think they were weighted but me does not remember, sorry :(
Maybe use Alpaca? Are they camels or sheep? :>
Feels pretty nice to me :)
Have fun! |