| modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars) |
|---|---|---|---|---|---|---|---|---|---|
Sao10K/Zephyrus-L1-33B | Sao10K | 2023-11-18T04:34:17Z | 1,635 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-25T21:56:56Z | ---
language:
- en
license: other
---

some quants I guess: https://huggingface.co/Sao10K/Zephyrus-L1-33B-GGUF
Zephyrus v1 - a multi-merge of several Llama1 30B Models using newer merging methods that appeared after the release of Llama2, with a sprinkling of an experimental LoRA on top.
The goal is to improve Writing Quality to match that of top L2 13B Models, while keeping the improved Logic and Spatial Awareness a 30B Model has. As usual, this is a Roleplay-focused Model, and I cannot test or verify its effectiveness as an Assistant or Tool.
Did I succeed? Partially. It's not the best partner for chatting, but I love it for storywriting and as my writing companion. It's fairly smart and spatially aware, and I've noticed no glaring issues so far.
While it may appear censored with zero input prompts, in actual Roleplay with a Character in SillyTavern, there are no issues even with NSFL topics as long as minimal context is there.
If you face any impersonation / dumb moments, a simple swipe or two fixes things.
It has its issues at times, yes, but this is my... first successful attempt. I'll try to work on more in the future.
SillyTavern Formats: simple-proxy-for-tavern in ST for Instruct Prompt, change to Default for Context Template.
<br>Ooba Presets I'd recommend, based on my testing: Kobold Godlike, NovelAI Best Guess, simple-proxy-for-tavern, or Shortwave with 1.22 Temperature. Test them out; different RPs work best with different presets.
**REMEMBER TO SET CONTEXT AT 2048 OR IT WILL BREAK. THIS IS A LLAMA1 MODEL AFTER ALL.**
Most formats could work, and Alpaca works best among them, but simple-proxy-for-tavern works much better still, so use that instead.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
Support me [here](https://ko-fi.com/sao10k) :)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Zephyrus-L1-33B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 55.79 |
| ARC (25-shot) | 64.51 |
| HellaSwag (10-shot) | 84.15 |
| MMLU (5-shot) | 57.37 |
| TruthfulQA (0-shot) | 53.87 |
| Winogrande (5-shot) | 80.19 |
| GSM8K (5-shot) | 23.58 |
| DROP (3-shot) | 26.89 |
|
Sao10K/BrainDerp2 | Sao10K | 2023-11-18T04:33:26Z | 1,635 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-29T15:26:57Z | ---
license: llama2
language:
- en
---
'We have EnSEmblEd toP RAnKeRs tO mAKE oUr mODeL.'
'This MoDeL hAS BeEN trAINED oN orCa-StyLe DatAsEtS.'
'***trAINed***' KEK
'We hAVE AchIEvEd TOp RAnKeR In thE leAdErBOArDs.'
Lmao, it's all BS, you're all running merge scripts like we do, literally trying to game the leaderboards huh? Pathetic lol. Literally 0 card info, copy-pasted from llama2 base models, with no goal other than going for the leaderboards.
At least merge with a goal, like come on. I'm going for RP, Undi's going for RP. At least make a believable goal.
BrainDerp is exposing the frauds who lie about it lol. At least be honest if you're merging like us poor people, which I appreciate some models do.
For BrainDerp, I have 'ensembled' several random models from the leaderboard to create this model.
I can easily steal the top leaderboard spots, but meh, not worth the effort.
That's it. I didn't bother testing much. YMMV.
<img src="https://blog.cdn.own3d.tv/resize=fit:crop,height:400,width:600/tbv2RYWpReqNtof2dD0U" style="width: 70%; min-width: 300px; display: block; margin: auto;">
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__BrainDerp2)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.18 |
| ARC (25-shot) | 60.92 |
| HellaSwag (10-shot) | 81.94 |
| MMLU (5-shot) | 58.9 |
| TruthfulQA (0-shot) | 57.19 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot) | 9.02 |
| DROP (3-shot) | 14.34 |
|
wei123602/Llama-2-13b-FINETUNE4_TEST3 | wei123602 | 2023-09-30T03:02:00Z | 1,635 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-30T02:38:57Z | Entry not found |
migtissera/Tess-M-v1.1 | migtissera | 2023-11-24T18:49:17Z | 1,635 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-22T03:23:34Z | ---
license: other
license_name: yi-34b
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
---
# Note:
This version is experimental and has been deprecated. Please use the stable release Tess-M-v1.3: https://huggingface.co/migtissera/Tess-M-v1.3
# Tess

Tess, short for Tessoro/Tessoso, is a general-purpose Large Language Model series. Tess-M-v1.1 was trained on the Yi-34B-200K base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
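A minimal `transformers` sketch using this prompt format (an illustrative example, not from the original card; the system and user strings are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: standard transformers loading combined with the
# SYSTEM/USER/ASSISTANT prompt format described above.
tokenizer = AutoTokenizer.from_pretrained("migtissera/Tess-M-v1.1")
model = AutoModelForCausalLM.from_pretrained("migtissera/Tess-M-v1.1", device_map="auto")

prompt = "SYSTEM: You are a helpful assistant.\nUSER: What is the capital of France?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```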
|
digiplay/majicMIX_lux_v3 | digiplay | 2024-03-12T20:47:33Z | 1,635 | 3 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-12T20:39:49Z | ---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/56967/majicmix-lux
https://civitai.com/models/56967?modelVersionId=286238
|
brazilianslib/Llama-3-8B-Dolphin-Portuguese-v0.3-Q8_0-GGUF | brazilianslib | 2024-06-27T15:27:39Z | 1,635 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"pt",
"dataset:adalbertojunior/dolphin_portuguese_legal",
"base_model:adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-06-27T15:27:01Z | ---
base_model: adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
datasets:
- adalbertojunior/dolphin_portuguese_legal
language:
- pt
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Llama-3-8B-Dolphin-Portuguese-v0.3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: ENEM Challenge (No Images)
type: eduagarcia/enem_challenge
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 68.86
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BLUEX (No Images)
type: eduagarcia-temp/BLUEX_without_images
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 57.86
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: OAB Exams
type: eduagarcia/oab_exams
split: train
args:
num_few_shot: 3
metrics:
- type: acc
value: 61.91
name: accuracy
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 RTE
type: assin2
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 93.05
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Assin2 STS
type: eduagarcia/portuguese_benchmark
split: test
args:
num_few_shot: 15
metrics:
- type: pearson
value: 76.48
name: pearson
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: FaQuAD NLI
type: ruanchaves/faquad-nli
split: test
args:
num_few_shot: 15
metrics:
- type: f1_macro
value: 76.78
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HateBR Binary
type: ruanchaves/hatebr
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 83.25
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: PT Hate Speech Binary
type: hate_speech_portuguese
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 68.85
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: tweetSentBR
type: eduagarcia/tweetsentbr_fewshot
split: test
args:
num_few_shot: 25
metrics:
- type: f1_macro
value: 71.3
name: f1-macro
source:
url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3
name: Open Portuguese LLM Leaderboard
---
# brazilianslib/Llama-3-8B-Dolphin-Portuguese-v0.3-Q8_0-GGUF
This model was converted to GGUF format from [`adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3`](https://huggingface.co/adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo brazilianslib/Llama-3-8B-Dolphin-Portuguese-v0.3-Q8_0-GGUF --hf-file llama-3-8b-dolphin-portuguese-v0.3-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo brazilianslib/Llama-3-8B-Dolphin-Portuguese-v0.3-Q8_0-GGUF --hf-file llama-3-8b-dolphin-portuguese-v0.3-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo brazilianslib/Llama-3-8B-Dolphin-Portuguese-v0.3-Q8_0-GGUF --hf-file llama-3-8b-dolphin-portuguese-v0.3-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo brazilianslib/Llama-3-8B-Dolphin-Portuguese-v0.3-Q8_0-GGUF --hf-file llama-3-8b-dolphin-portuguese-v0.3-q8_0.gguf -c 2048
```
|
togethercomputer/GPT-JT-6B-v0 | togethercomputer | 2022-11-24T09:07:01Z | 1,634 | 2 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"gpt",
"en",
"dataset:natural_instructions",
"dataset:the_pile",
"dataset:cot",
"dataset:Muennighoff/P3",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-11-22T12:26:50Z | ---
language:
- en
datasets:
- natural_instructions
- the_pile
- cot
- Muennighoff/P3
tags:
- gpt
pipeline_tag: text-generation
inference:
parameters:
temperature: 0.1
widget:
- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy. Answer:"
example_title: "Sentiment analysis"
- text: "Where is Zurich? Ans:"
example_title: "Question Answering"
---
# Quick Start
```python
from transformers import pipeline
pipe = pipeline(model='togethercomputer/GPT-JT-6B-v0')
pipe("Where is Zurich? Ans:")
``` |
kevinpro/Vicuna-13B-CoT | kevinpro | 2023-06-04T08:05:43Z | 1,634 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"code",
"en",
"dataset:QingyiSi/Alpaca-CoT",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-01T03:12:22Z | ---
datasets:
- QingyiSi/Alpaca-CoT
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- code
---
# Model Card for Model ID
SFT to enhance the CoT capability of Vicuna
If you find the model helpful, please click "like" to support us.
We also welcome feedback on your usage experience and any issues you encounter in the issues section.
Another 7B version: https://huggingface.co/kevinpro/Vicuna-7B-CoT
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Fredithefish/RedPajama-INCITE-Chat-3B-ShareGPT-11K | Fredithefish | 2023-06-06T16:36:59Z | 1,634 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:Fredithefish/ShareGPT-Unfiltered-RedPajama-Chat-format",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-06T16:24:52Z | ---
license: apache-2.0
datasets:
- Fredithefish/ShareGPT-Unfiltered-RedPajama-Chat-format
language:
- en
inference: false
---
### Model Description
<a href="https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1">RedPajama-INCITE-Chat-3B-v1</a> fine-tuned on 11K pairs from the unfiltered ShareGPT dataset
- **Model type:** language model
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** RedPajama-INCITE-Chat-3B |
CobraMamba/mamba-gpt-3b | CobraMamba | 2023-07-28T06:42:23Z | 1,634 | 4 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-12T06:08:57Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
---
# Model Card
## Github
https://github.com/chi2liu/mamba-gpt-3b
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 25.3 |
| ARC (25-shot) | 40.5 |
| HellaSwag (10-shot) | 64.9 |
| TruthfulQA (0-shot) | 37.1 |
| Avg. | 42.0 |
We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above.
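A hypothetical invocation along those lines (flag names follow the harness's pre-0.4 `main.py` interface and are an assumption, not taken from the card):
```bash
# Hypothetical example: reproducing the 25-shot ARC number with the harness.
python main.py \
  --model hf-causal \
  --model_args pretrained=CobraMamba/mamba-gpt-3b \
  --tasks arc_challenge \
  --num_fewshot 25
```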
## Summary
We have fine-tuned the OpenLLaMA model and surpassed the original on multiple evaluation subtasks, making it currently the best-performing 3B model, with performance comparable to llama-7b.
- Base model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="CobraMamba/mamba-gpt-3b",
torch_dtype="auto",
trust_remote_code=True,
use_fast=False,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```
Alternatively, you can download the mamba_gpt_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from mamba_gpt_pipeline import MambaGPTTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"CobraMamba/mamba-gpt-3b",
use_fast=False,
padding_side="left",
trust_remote_code=False,
)
model = AutoModelForCausalLM.from_pretrained(
"CobraMamba/mamba-gpt-3b",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=False,
)
generate_text = MambaGPTTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "CobraMamba/mamba-gpt-3b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=False,
trust_remote_code=False,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=False,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=1024,
do_sample=False,
num_beams=1,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | finetuned-GPT 3B | OpenLLaMA 3B |
| ---------------------- | -------- | ------------ |
| anli_r1/acc | **0.35** | 0.33 |
| anli_r2/acc | **0.33** | 0.32 |
| anli_r3/acc | 0.35 | 0.35 |
| arc_challenge/acc | **0.35** | 0.34 |
| arc_challenge/acc_norm | 0.37 | 0.37 |
| arc_easy/acc | **0.71** | 0.69 |
| arc_easy/acc_norm | 0.65 | 0.65 |
| boolq/acc | **0.72** | 0.66 |
| hellaswag/acc | **0.49** | 0.43 |
| hellaswag/acc_norm | 0.66 | **0.67** |
| openbookqa/acc | 0.26 | **0.27** |
| openbookqa/acc_norm | 0.40 | 0.40 |
| piqa/acc | **0.76** | 0.75 |
| piqa/acc_norm | 0.76 | 0.76 |
| record/em | 0.88 | 0.88 |
| record/f1 | 0.88 | **0.89** |
| rte/acc | 0.55 | **0.58** |
| truthfulqa_mc/mc1 | **0.27** | 0.22 |
| truthfulqa_mc/mc2 | **0.37** | 0.35 |
| wic/acc | **0.49** | 0.48 |
| winogrande/acc | **0.63** | 0.62 |
| Average | **0.53** | 0.52 |
We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on them. We hypothesize that there could be benchmark data contamination in the training set.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
jondurbin/airoboros-13b-gpt4-1.4-fp16 | jondurbin | 2023-06-25T11:39:17Z | 1,634 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-22T10:46:42Z | ---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
float16 version of https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4 |
Yntec/Noosphere_v3_CVAE | Yntec | 2024-04-06T07:05:45Z | 1,634 | 2 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"skumerz",
"Rexts",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-24T05:18:51Z | ---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- skumerz
- Rexts
---
# Noosphere v3 Color 101 VAE
The ultimate version of Noosphere v3 by skumerz! Better definition, better contrast and saturation! Thanks to Rexts's Color 101 VAE! The best 2.5D model just got better. Limited supply, get it while it lasts!
Original pages:
https://civitai.com/models/36538?modelVersionId=107675
https://civitai.com/models/70248/color101-vae |
golaxy/gogpt2-13b | golaxy | 2023-08-12T09:19:39Z | 1,634 | 4 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-31T13:32:31Z | ---
license: apache-2.0
datasets:
- c-s-ale/alpaca-gpt4-data-zh
language:
- zh
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-13B: A Chinese-English Enhanced Large Model Trained on Llama2-13b

<p align="center">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square">
<img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca">
</p>
> ICT Chinese-English enhanced foundation models: 7 billion and 13 billion parameters
GoGPT weights are provided on 🤗 Hugging Face; the gogpt-7b and gogpt2-7b weights are currently open.
| Model | Base model | Size | Download |
|---|---|---|---|
| [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) |
| [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) |
| [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) |
| [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) |
| [golaxy/gogpt2-13b](https://huggingface.co/golaxy/gogpt2-13b) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b) |
[GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT)
## Sample Outputs
- How do I learn machine learning?
```
OK, here are some ways to learn machine learning:
1. Hands-on practice: learn through practical projects, for example by using open-source machine learning frameworks (such as TensorFlow, Keras, or Scikit-Learn) to implement various machine learning tasks.
2. Courses: take online or offline machine learning courses, for example on Coursera, Udemy, or edX; these platforms all offer a variety of machine learning courses.
3. Books: read some classic machine learning books, such as Andrew Ng's "Deep Learning" and "Machine Learning"; they can help you learn the fundamentals of machine learning in depth.
4. Communities: join machine learning communities (such as Kaggle, GitHub, or Reddit) to exchange experience with other machine learning enthusiasts and keep learning and improving.
5. Competitions: take part in machine learning competitions (such as on Kaggle); solving real problems is a good way to learn and improve your skills.
In short, learning machine learning takes continuous practice and exploration; find the learning method that suits you.
```
- Write a symphony concert review discussing the orchestra's performance and the overall audience experience.
```
A symphony concert is the dream of music lovers and classical music enthusiasts. Last weekend I attended one, and the experience was truly unforgettable.
First, the orchestra's performance was outstanding. They are all professional musicians with highly polished playing technique. Their performance conveyed the essence of the music and let the audience feel its power and charm. They also rendered the different elements of the symphony, such as the high and low registers, rhythm, tonality, and variation, with great skill. Their playing felt natural, neither overly tense nor overly relaxed, leaving a calm, harmonious impression.
Second, the overall audience experience was also very good. The hall was spacious and bright, with generous spacing between seats, so the audience could freely appreciate every detail of the music. The atmosphere was friendly, with audience members chatting with one another and sharing the music together, and the venue offered a variety of food and drinks, which added to everyone's comfort.
Overall, this symphony concert was a thoroughly enjoyable musical experience: the orchestra performed superbly, and the audience experience was excellent. I genuinely look forward to the next concert and to experiencing the charm of the music once again.
```
- Write an engaging review of a recently released science-fiction film, discussing its plot, characters, and special effects.
```
"Star Wars: A New Era" is an exciting science-fiction film that continues the story and characters of the earlier Star Wars series while bringing new challenges and surprises.
The plot is gripping: a rebel force struggles against the Empire, with the rebels trying to overthrow imperial rule while the Empire fights to maintain it. The complexity and tension of the storyline are striking, and every character has their own motives and history, which makes the whole story feel vivid.
The main characters include Rey, a young warrior, and her mentor Kylo Ren, a villainous knight. The interaction and conflict between these two is a major focus of the film; their storyline is full of mystery and suspense, keeping the audience wondering what will happen next. The film also introduces new characters, such as Maisy Endor, a rebel leader on a new planet, along with the return of classic droids like C-3PO and R2-D2, whose performances are equally exciting.
The visual effects are stunning, using the latest technology to create realistic space scenes and battle sequences. The planets, buildings, machinery, and weapons all look remarkably lifelike, and the action scenes are thrilling, immersing the audience even more deeply in the film's world.
Overall, "Star Wars: A New Era" is a science-fiction film well worth watching: a compelling story with great characters and effects that make the whole film exciting and impressive. Its success also points to the future of the Star Wars series.
```
- How many times does a human blink on average in a lifetime? Try to explain your answer, walking the reader step by step through your reasoning.
```
On average, humans blink about 5,600,000 times over a lifetime. This figure comes from many different studies and estimates.
First, blink frequency changes with age. Infants blink more often than adults, and blink frequency gradually decreases as we grow older. From this pattern we can estimate an adult blink rate of about 10 blinks per minute.
Second, blink frequency is affected by many factors. For example, people blink more when they feel tense or tired, and less when they feel happy or relaxed. Blink frequency can also differ between different parts of the eye.
Finally, eye health affects blink frequency as well, for example fundus disease or vision problems. All of these factors can change how often a person blinks.
In summary, humans blink roughly 5,600,000 times on average over a lifetime. This number results from many factors, including age, physical condition, and emotional state.
```
- Implement the quicksort algorithm in Python
Here is example code for the quicksort algorithm in Python:
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
else:
pivot = arr[0]
left = []
right = []
for i in range(1, len(arr)):
if arr[i] < pivot:
left.append(arr[i])
else:
right.append(arr[i])
return quick_sort(left) + [pivot] + quick_sort(right)
```
This function takes an array of integers as input; its worst-case time complexity is O(n^2) and its best-case time complexity is O(n log n).
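A quick sanity check of the function above (an illustrative call, not part of the original card):
```python
# Hypothetical usage of the quick_sort function defined above.
print(quick_sort([3, 6, 1, 8, 2, 9, 4]))  # -> [1, 2, 3, 4, 6, 8, 9]
```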
- How can I improve my time-management skills?
```
Here are some suggestions for improving your time-management skills:
1. Make a plan: every morning, or at the start of each day, draw up a detailed plan of the tasks and goals you need to complete that day.
2. Prioritize: sort tasks by priority and complete the important ones first.
3. Focus: concentrate on the current task and avoid distractions.
4. Avoid procrastination: avoid putting things off whenever possible and finish tasks promptly so they don't become overdue.
5. Avoid multitasking: multitasking makes each task take longer, so it is best to focus on one task at a time until it is finished.
6. Rest: rest is key to recovery and productivity, so take regular breaks to relax body and mind.
7. Adjust your schedule: if you find your time management isn't working, adjust it, for example by lengthening breaks or shortening task blocks.
8. Learn new skills: learning new skills, such as using tools or software to manage tasks, can improve your efficiency.
9. Ask for help: if you find it hard to manage your time, ask family, friends, or colleagues for help.
I hope these suggestions help you improve your time-management skills.
```
- What are the main differences between the Python and JavaScript programming languages?
```
Python and JavaScript are two different programming languages, with the following main differences:
1. Data types: Python is a dynamically typed language; you do not declare data types, and a variable's type is determined automatically from the value it holds. JavaScript is also dynamically typed and uses literals to give variables their types.
2. Syntax: the two languages share some syntactic similarities but also differ. For example, Python uses indentation to delimit code blocks, while JavaScript uses curly braces.
3. Execution environment: Python usually runs in a Python interpreter on the local machine, while JavaScript usually runs in a browser. This means Python is typically used for desktop applications and server-side web applications, while JavaScript is mainly used for web applications.
4. Language characteristics: Python offers greater readability and maintainability, while JavaScript offers greater extensibility.
In short, Python and JavaScript are different languages with different characteristics and use cases; which to choose depends on the project's needs and the developer's skills.
```
- What are the main differences between plant protein and animal protein?
```
The main differences between plant protein and animal protein lie in their sources and amino acid composition.
Sources:
Plant protein comes mainly from plant tissues such as legumes, seeds, fruits, and leaves, while animal protein comes mainly from animal tissue such as muscle, organs, fish, and eggs.
Amino acid composition:
Plant proteins consist mainly of plant-derived amino acids and plant-based essential nutrients, while animal proteins consist mainly of animal-derived amino acids, such as those found in muscle and milk proteins.
The nutritional value of the two therefore differs. Plant protein is rich in plant-based essential nutrients, while animal protein contains more animal-derived essential amino acids. Plant protein is usually lower in calories than animal protein, though some plant proteins have lower protein quality; animal protein usually has higher protein quality but also contains more saturated fat and cholesterol.
```
- What are the main factors influencing consumer behavior?
```
Many factors influence consumer behavior; here are some of the main ones:
1. Consumer needs: needs are one of the main drivers of consumer behavior and arise from personal interests and values. For example, a consumer who likes fashionable clothing will find the malls and brands that sell it more attractive.
2. Purchasing power: budget constraints also shape behavior; consumers may choose certain goods because of what they can afford.
3. Price: price is another influencing factor; consumers typically compare the prices of different goods and decide what to buy accordingly.
4. Brand recognition: consumers tend to choose reputable brands because they believe these brands can deliver high-quality products and services.
5. Social media: social media plays an increasingly important role in consumer behavior; many consumers learn about products and brands on social media and buy based on that information.
In short, many factors influence consumer behavior, and they are often interrelated. By understanding them, businesses can craft effective marketing strategies to attract and retain consumers.
```
|
HyperbeeAI/Tulpar-7b-v0 | HyperbeeAI | 2023-09-13T19:04:10Z | 1,634 | 22 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-23T10:13:55Z | ---
license: llama2
language:
- en
library_name: transformers
thumbnail: "https://huggingface.co/HyperbeeAI/Tulpar-7b-v0/resolve/main/tulpar.png"
---
<p align="center">
<img src="https://huggingface.co/HyperbeeAI/Tulpar-7b-v0/resolve/main/tulpar.png" width="360" height="360" >
</p>
# Model Description
Tulpar-7b is a Llama2-7b-based model trained by HyperbeeAI. Training was done on a filtered and preprocessed instruction-finetuning dataset that includes GPT-4-generated and curated datasets such as Airoboros and Platypus.
# Example Usage
Loading the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("HyperbeeAI/Tulpar-7b-v0")
model = AutoModelForCausalLM.from_pretrained("HyperbeeAI/Tulpar-7b-v0", device_map="auto")
```
You can run inference with both of the following prompts:
```python
input_text="What is deep learning?"
prompt = f"### User: {input_text}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```
```python
input_text="What is deep learning?"
prompt = f"Question: {input_text}\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512)
print(tokenizer.decode(output[0]))
```
# Evaluation
Our offline HF Leaderboard evaluation results:
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.5614|
|*hellaswag*|acc_norm|0.7901|
|*mmlu*|acc_norm|0.5242|
|*truthfulqa_mc*|mc2|0.5160|
|**Average**|-|**0.5979**|
Other GPT4All evaluation results:
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|boolq|acc |0.8306|
|piqa|acc |0.7905|
| |acc_norm|0.7884|
|winogrande|acc |0.7159|
|openbookqa|acc |0.356|
| |acc_norm|0.448|
|**Average** (including HF leaderboard datasets) | | **0.6468** |
BigBenchHard results:
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|bigbench_causal_judgement |multiple_choice_grade|0.6105|
|bigbench_date_understanding |multiple_choice_grade|0.6423|
|bigbench_disambiguation_qa |multiple_choice_grade|0.3643|
|bigbench_dyck_languages |multiple_choice_grade|0.2000|
|bigbench_formal_fallacies_syllogisms_negation |multiple_choice_grade|0.5002|
|bigbench_geometric_shapes |multiple_choice_grade|0.0000|
| |exact_str_match |0.0000|
|bigbench_hyperbaton |multiple_choice_grade|0.6754|
|bigbench_logical_deduction_five_objects |multiple_choice_grade|0.2700|
|bigbench_logical_deduction_seven_objects |multiple_choice_grade|0.1929|
|bigbench_logical_deduction_three_objects |multiple_choice_grade|0.4133|
|bigbench_movie_recommendation |multiple_choice_grade|0.3000|
|bigbench_navigate |multiple_choice_grade|0.5000|
|bigbench_reasoning_about_colored_objects |multiple_choice_grade|0.5750|
|bigbench_ruin_names |multiple_choice_grade|0.3281|
|bigbench_salient_translation_error_detection |multiple_choice_grade|0.2976|
|bigbench_snarks |multiple_choice_grade|0.6022|
|bigbench_sports_understanding |multiple_choice_grade|0.5122|
|bigbench_temporal_sequences |multiple_choice_grade|0.1450|
|bigbench_tracking_shuffled_objects_five_objects |multiple_choice_grade|0.1976|
|bigbench_tracking_shuffled_objects_seven_objects|multiple_choice_grade|0.1440|
|bigbench_tracking_shuffled_objects_three_objects|multiple_choice_grade|0.4133|
|**Average**| |**0.3754**|
# Ethical Considerations and Limitations
Tulpar is a technology with potential risks and limitations. This model is finetuned only in English and all language-related scenarios are not covered. As HyperbeeAI, we neither guarantee ethical, accurate, unbiased, objective responses nor endorse its outputs. Before deploying this model, you are advised to make safety tests for your use case. |
Sao10K/Medusa-13b | Sao10K | 2023-11-18T04:33:07Z | 1,634 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-28T14:22:41Z | ---
license: llama2
language:
- en
---
<img src="https://s1.zerochan.net/Lancer.%28Medusa%29.600.2885725.jpg" style="width: 50%; min-width: 200px; display: block; margin: auto;">
<br>My first-ever attempt at model merging. I have zero experience, like zero, none. I don't know how to code, or use Linux. Getting the terminal to work on Runpod was a nightmare.
<br>An experimental model I posted. Usable in RPs, but it is not the best compared to top-dog models like Mythomax.
<br> Just proof that even a clueless guy like me can work on models.
<br> I'm still learning basics on fine-tuning and training models.
<br> I am seriously reading up on various model merging methods, and what models are best to use. I'll leave this here as my first work.
Base Model: [Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16)
<br> ---**Merged with**
<br> ----- [chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
<br> ----- [Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
<br> ----- [airoboros-l2-13b-gpt4-m2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-m2.0)
<br> ----- [OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
<br> ----- [Kimiko-13B](https://huggingface.co/TheBloke/Kimiko-13B-fp16)
<br> ----- [StableBeluga-13B](https://huggingface.co/stabilityai/StableBeluga-13B)
Rationale:
<br> Chronos - Used Chronoboros L1 so I put them here
<br> Nous-Hermes - First ever L2 Model I used (づ ᴗ _ᴗ)づ♡
<br> Airoboros - Used Chronoboros L1 so I put them here
<br> OpenOrca-Platypus - platypus.
<br> Kimiko - Cute sounding name
<br> StableBeluga - favourite animal
<br> Prompt Format? No clue. I've been using Alpaca and it's been working fine.
<br>Script used to Merge [here](https://github.com/cg123/ties-merge)
<br>Thank you for the easy to set up script, [Chargoddard](https://huggingface.co/chargoddard) !
<br>Command used below:
```
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./Medusa-13b --merge elinas/chronos-13b-v2 --merge NousResearch/Nous-Hermes-Llama2-13b --merge jondurbin/airoboros-l2-13b-gpt4-m2.0 --merge Open-Orca/OpenOrca-Platypus2-13B --merge TheBloke/Kimiko-13B-fp16 --merge stabilityai/StableBeluga-13B --cuda
```
<br>Do not let me cook!
```
Image Credit: https://www.pixiv.net/en/artworks/80169912
Artist: TEL-O
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Medusa-13b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 49.85 |
| ARC (25-shot) | 58.19 |
| HellaSwag (10-shot) | 81.35 |
| MMLU (5-shot) | 57.39 |
| TruthfulQA (0-shot) | 51.24 |
| Winogrande (5-shot) | 73.32 |
| GSM8K (5-shot) | 6.82 |
| DROP (3-shot) | 20.61 |
|
uukuguy/speechless-llama2-luban-orca-platypus-13b | uukuguy | 2023-09-01T06:28:52Z | 1,634 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-2",
"en",
"dataset:garage-bAInd/Open-Platypus",
"arxiv:2307.09288",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-01T02:43:40Z | ---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
<p><h1> speechless-llama2-luban-orca-platypus-13b </h1></p>
speechless-llama2-luban-orca-platypus-13b is a merge of AIDC-ai-business/Luban-13B and Open-Orca/OpenOrca-Platypus2-13B.
| Metric | Value |
| --- | --- |
| ARC | 62.54 |
| HellaSwag | 82.76 |
| MMLU | 59.23 |
| TruthfulQA | 54.66 |
| Average | 64.80 |
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
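For reference, the linked `chat_completion` code assembles a single-turn prompt in roughly the following shape (a sketch of the template; the system and user strings are placeholders):
```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_message} [/INST]
```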
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
Faradaylab/ARIA-70B-V3 | Faradaylab | 2023-10-10T14:02:44Z | 1,634 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-11T09:34:47Z | ---
license: other
---
ARIA V3 has been trained on over 100,000 high-quality French-language samples, with a focus on data bias, grammar, and the model's overall language and writing capabilities.
Training was done on Nvidia GPUs in the cloud with Amazon SageMaker.
Base Model : Llama2-70B-Chat-HF
Dataset : private dataset.
Added value : French Language / Writing / Content Creation / Data bias reduction
Feel free to reach out to us at [email protected]!
|
CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-gate_up_down | CHIH-HUNG | 2023-09-19T22:24:10Z | 1,634 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-19T22:09:24Z | Entry not found |
amazingvince/zephyr-smol_llama-100m-sft-full | amazingvince | 2023-11-20T17:27:19Z | 1,634 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:BEE-spoke-data/smol_llama-101M-GQA",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-20T16:50:31Z | ---
license: apache-2.0
base_model: BEE-spoke-data/smol_llama-101M-GQA
tags:
- generated_from_trainer
model-index:
- name: zephyr-smol_llama-100m-sft-full
results: []
---
# zephyr-smol_llama-100m-sft-full
This model is a fine-tuned version of [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9579
## Model description
More information needed
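In the meantime, a minimal generation sketch with the `transformers` pipeline (the chat-template usage and sampling settings below are assumptions, not documented behavior for this checkpoint):
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="amazingvince/zephyr-smol_llama-100m-sft-full",
    torch_dtype=torch.bfloat16,
)

# Zephyr-style SFT checkpoints are typically queried through the tokenizer's
# chat template; whether this checkpoint ships one is an assumption.
messages = [{"role": "user", "content": "What is gradient accumulation?"}]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```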
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9642 | 0.7 | 1141 | 1.9578 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
BreadAi/gpt-YA-1-1_70M | BreadAi | 2023-11-18T10:45:28Z | 1,633 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:breadlicker45/1m-YA-dataset",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-02-01T20:09:00Z | ---
datasets:
- breadlicker45/1m-YA-dataset
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BreadAi__gpt-YA-1-1_70M)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 25.64 |
| ARC (25-shot) | 22.53 |
| HellaSwag (10-shot) | 27.37 |
| MMLU (5-shot) | 25.38 |
| TruthfulQA (0-shot) | 47.09 |
| Winogrande (5-shot) | 50.91 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 6.18 |
|
h2oai/h2ogpt-gm-oasst1-en-1024-12b | h2oai | 2023-05-02T19:15:21Z | 1,633 | 5 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-02T12:02:19Z | ---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
license: apache-2.0
datasets:
- OpenAssistant/oasst1
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-12b-deduped](https://huggingface.co/EleutherAI/pythia-12b-deduped)
- Dataset preparation: [OpenAssistant/oasst1](https://github.com/h2oai/h2o-llmstudio/blob/1935d84d9caafed3ee686ad2733eb02d2abfce57/app_utils/utils.py#LL1896C5-L1896C28)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="h2oai/h2ogpt-gm-oasst1-en-1024-12b",
torch_dtype=torch.float16,
trust_remote_code=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-1024-12b",
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"h2oai/h2ogpt-gm-oasst1-en-1024-12b",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "h2oai/h2ogpt-gm-oasst1-en-1024-12b" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50688, 5120)
(layers): ModuleList(
(0-35): 36 x GPTNeoXLayer(
(input_layernorm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=5120, out_features=15360, bias=True)
(dense): Linear(in_features=5120, out_features=5120, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=5120, out_features=20480, bias=True)
(dense_4h_to_h): Linear(in_features=20480, out_features=5120, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((5120,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=5120, out_features=50688, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=h2oai/h2ogpt-gm-oasst1-en-1024-12b --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.3345|± |0.0138|
| | |acc_norm|0.3754|± |0.0142|
|arc_easy | 0|acc |0.6435|± |0.0098|
| | |acc_norm|0.5800|± |0.0101|
|boolq | 1|acc |0.5098|± |0.0087|
|hellaswag | 0|acc |0.5150|± |0.0050|
| | |acc_norm|0.6951|± |0.0046|
|openbookqa | 0|acc |0.3080|± |0.0207|
| | |acc_norm|0.3980|± |0.0219|
|piqa | 0|acc |0.7704|± |0.0098|
| | |acc_norm|0.7704|± |0.0098|
|winogrande | 0|acc |0.6622|± |0.0133|
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. |
Rallio67/7B-redpajama-conditional-alpha | Rallio67 | 2023-05-14T18:30:16Z | 1,633 | 7 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-09T01:30:27Z | # Conditional Pretraining of Large Language Models
Large language models (LLMs), such as OpenAI's ChatGPT and similar chatbot products from other organizations, have recently gained widespread adoption. These models can extend text or respond to instructions in a natural and helpful manner. Despite the core technologies behind LLMs, namely the transformer architecture and the GPT decoder-only causal language model, remaining relatively unchanged for over five years, the surge in popularity of ChatGPT can be largely attributed to recent approaches that better align the output of LLMs with users' and service providers' intentions.
## Primary Approaches for Aligning LLMs with Human Expectations
1. **Supervised finetuning (SFT) on natural instructions**
2. **Reinforcement learning from human feedback (RLHF)**
## Conditional Pretraining: A Third Approach
- Utilizes a large number of pretraining examples tagged with human-understandable classifiers
- Leverages content tagging found in many online environments
- Examples of commonly used tags:
- Suitable for work (SFW) and not suitable for work (NSFW)
- G, PG, PG-13, and R for television and movie content
## Converting Existing Pretraining Data into Conditional Pretraining Data
- Traditional pretraining involves predicting the subsequent word in minimally processed text.
- Conditional pretraining prepends training examples with descriptive tags and a brief synopsis.
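For illustration, a minimal sketch of that conversion, assuming hypothetical `tags` and `synopsis` annotations already exist for each document:
```python
# Hypothetical sketch: prepend tags and a one-line synopsis to a raw
# pretraining document, matching the "[ tag1, tag2]\n# synopsis" format
# shown later in this card.
def to_conditional_example(tags, synopsis, document):
    header = "[ " + ", ".join(tags) + "]"
    return f"{header}\n# {synopsis}\n{document}"

print(to_conditional_example(
    ["artificial intelligence", "open source"],
    "A short article about keeping AI research open.",
    "Large language models have recently gained widespread adoption...",
))
```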
## Transparency and Accountability
- Current LLMs have proprietary instructions and reward models, which can hinder public review and discussions on sensitive topics.
- Conditional pretraining tags are transparent and easily understood by auditors or end users.
## Conditional pretraining example
An example output from this conditional tagging model for a recent news article about LAION ([article here](https://www.forbes.com/sites/hessiejones/2023/04/19/amid-growing-call-to-pause-ai-research-laion-petitions-governments-to-keep-agi-research-open-active-and-responsible/)) is shown below. Only text from the body of the article was used to generate these document tags.
```
[ artificial intelligence, open source, ai, open letter, open source ai, ai research]
# This article explains the importance of a CERN-like organization to coordinate efforts on the transparency of large-scale AI research and provides information about LAION.
```
## How to use the model
Format your inputs like this:
```
[ tag1, tag2, tag3, tag_n]
# This is a short synopsis of what kind of text I want to generate.
```
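A minimal generation sketch using that format (the sampling settings are illustrative assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Rallio67/7B-redpajama-conditional-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Tags plus a one-line synopsis, in the format described above.
prompt = "[ artificial intelligence, open source]\n# A short explainer on why open AI research matters.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```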
## Acknowledgement
Thank you to LAION and Stability.ai for support and compute resources to experiment with conditional pretraining.
## Disclaimer
- Conditional pretraining helps the user control the outputs of the model.
- However, these models (and all language models) can still generate undesirable content.
- So please enjoy and use with care!
|
Fredithefish/ReasonixPajama-3B-HF | Fredithefish | 2023-09-01T11:01:51Z | 1,633 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"dataset:Fredithefish/Reasonix",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-07T17:20:44Z | ---
license: apache-2.0
datasets:
- Fredithefish/Reasonix
language:
- en
library_name: transformers
inference: false
---
# ReasonixPajama |
64bits/LexPodLM-13B | 64bits | 2023-11-17T21:38:40Z | 1,633 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:64bits/lex_fridman_podcast_for_llm_vicuna",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-10T06:17:34Z | ---
license: other
datasets:
- 64bits/lex_fridman_podcast_for_llm_vicuna
language:
- en
library_name: transformers
tags:
- llama
- transformers
---
LexPodLM-13B is a LLaMA-13B model fine-tuned using Vicuna's schema, with this dataset: https://huggingface.co/datasets/64bits/lex_fridman_podcast_for_llm_vicuna
While it might not excel in tasks like code writing, fact searching, or essay writing, it can potentially serve as an effective chat model, capable of providing verbose and nuanced responses.
I trained the model on 4 A100 GPUs (80 GB each) for ~15 hrs.
***
Eval (evaluate models like [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) using [Eleuther AI Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness)):
| Model | Average | ARC (25-shot) | HellaSwag (10-shot) | MMLU (5-shot) | TruthfulQA (0-shot) |
| ------------ | ------- | ------------- | ------------------- | ------------- | ------------------- |
| LexPodLM-13B | 52.5 | 48.8 | 79.4 | 37.9 | 43.8 |
| LLaMA-13B | 51.8 | 50.8 | 78.9 | 37.7 | 39.9 |
***
*Human: do you think consciousness is fundamentally computational?*
*Assistant: I think consciousness is foundational computational. Yes. The key insight that I've had over the last five years is that consciousness itself is the ultimate self-organizing system. It's the ultimate emergent of the universe. It's the ultimate thing that's not explainable in terms of its parts. So it's not that I think brains are computation. I think, you know, brains are made of neurons, and neurons are made of cells. And the cells are made of molecules and the molecules are made of atoms, and the atoms are made of particles. And we can explain all of those things in terms of their parts. There's no emergent physics in physics. There are no emergent laws in physics. And you know, people have been trying to get to this level. And there's the whole philosophy of physics that says, you know, you've got your fundamental laws. Let's just explain everything in terms of the fundamental laws. But that's just not how physics works. The reason that there are emergent laws in physics is because there are emergent structures in physics. And the first emergent structure is the cell. The cell is a structure that doesn't exist in physics. It's an emergent structure. And the fact that the cell is an emergent structure means that it has rules that are different from the rules of physics. And the same thing with brains. Brains have rules that are different from the rules of physics. And the same thing with consciousness. Consciousness, the fact that it exists at all is an emergent fact. It's not a fact that you can reduce to the underlying physics. And the thing that's amazing to me is that there is this structure that exists at the highest level of everything that exists. And it's not an emergent. So the fact that you and I have consciousness is not an emergent fact. It's a foundational fact. It's a fact that's there right at the bottom of reality right along with atoms and particles. I mean, it's not that I think that brains are conscious. I think that brains are a particular way of organizing consciousness. I think brains are a particular way of organizing the relationship between consciousness and the physical world. And so when people say to me, you know, well, what do you think?*
***
*Human: What are the ethical implications of artificial intelligence?*
*Assistant: The ethical implications of artificial intelligence are significant. As AI systems become more advanced and integrated into our daily lives, we have to be aware of their limitations and potential unfair biases. Such systems should be designed with a focus on fairness, transparency, and security. AI systems should not be used to replace human decision-making. Their output should be easily interpretable, so that decisions can be traced to rationales and justifications. We should also prioritize the well-being of humans over the well-being of AI systems. We need to ensure that AI systems are designed in a way that they do not harm users or other people. This can be achieved by building in safety mechanisms and testing systems for potential biases and security vulnerabilities.*
***
*Human: How do you think quantum computing will affect our future?*
*Assistant: Quantum computing has the potential to change the way we compute in many ways. One of the most significant benefits will be the possibility of solving previously unsolvable problems and accelerating the development of machine learning and artificial intelligence. By harnessing the power of quantum computers, we can make machines that are exponentially more powerful than any existing computer.*
***
*Human: What are your thoughts on the Singularity theory?*
*Assistant: I am a strong proponent of the singularity. I have seen many technological advancements in my career. I've seen the power of exponential growth in technology. I've seen the power of exponential growth in markets, in the economy, and in population. And I'm very excited about the future. I'm very excited about what's going to happen. And I think that it's important for the world to understand that. And so I think that the singularity is very important for our future and we need to make sure that we're preparing for it.*
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_64bits__LexPodLM-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 49.51 |
| ARC (25-shot) | 57.76 |
| HellaSwag (10-shot) | 81.04 |
| MMLU (5-shot) | 48.38 |
| TruthfulQA (0-shot) | 43.48 |
| Winogrande (5-shot) | 76.16 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 39.71 |
|
psyche/kollama2-7b-v2 | psyche | 2023-11-18T10:17:42Z | 1,633 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-29T07:45:34Z | ### Under Construction
This model has some decoding patterns that originate from the fine-tuning dataset, so I will continue training the model to remove these patterns.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psyche__kollama2-7b-v2)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 44.91 |
| ARC (25-shot) | 53.33 |
| HellaSwag (10-shot) | 78.5 |
| MMLU (5-shot) | 43.61 |
| TruthfulQA (0-shot) | 46.37 |
| Winogrande (5-shot) | 75.61 |
| GSM8K (5-shot) | 6.52 |
| DROP (3-shot) | 10.4 |
|
ashercn97/manatee-7b | ashercn97 | 2023-08-02T16:56:51Z | 1,633 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"orca",
"transformer",
"cool",
"en",
"dataset:ashercn97/OpenOrcaPleaseWork",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-30T12:09:39Z | ---
datasets:
- ashercn97/OpenOrcaPleaseWork
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- orca
- transformer
- cool
---
This is based on llama-2-7b and fine-tuned on two Orca datasets. It took around 6 hours on 1 L40 GPU.
Manatee is one of my first projects, so I hope you enjoy using it! To use it, you can either use it through the transformers library or, if you have limited memory, you can use the GPTQ version that is on my profile!
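For example, a minimal `transformers` sketch (the plain-text prompt and generation settings are illustrative assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ashercn97/manatee-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain why the sky is blue in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```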
In the future, I plan on fine-tuning higher parameter models or making a better version of Manatee-7b. |
Aspik101/llama-30b-2048-instruct-PL-lora_unload | Aspik101 | 2023-08-07T08:37:07Z | 1,633 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-07T08:17:15Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
yeontaek/llama-2-13b-Beluga-QLoRA | yeontaek | 2023-08-20T07:05:25Z | 1,633 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-20T05:52:27Z | Entry not found |
yeontaek/Platypus2xOpenOrca-13B-LoRa-v2 | yeontaek | 2023-08-23T04:16:08Z | 1,633 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-23T04:01:44Z | Entry not found |
jondurbin/airoboros-33b-2.1 | jondurbin | 2023-09-03T10:29:04Z | 1,633 | 9 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-2.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-01T03:49:37Z | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-2.1
---
### Overview
This is an instruction fine-tuned llama 30b model, using synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)
- Experimental RP style instruction set, with two categories: rp and gtkm
- rp includes multi-round chats, with emotes, between a varying number of characters, defined by cards
- gtkm is a way to test a simpler alternative to ghost attention - first, a character card is generated, then several questions are created to ask the model (as the character), using the character system prompt, then everything is synthesized into a dialog (one system prompt, all turns remain in character)
- Experimental support for longer, more detailed writing prompts, as well as next-chapter generation
- I used the new `cull-instructions` entrypoint in airoboros to shrink the m2.0 dataset to a smaller subset of high-quality instructions (according to gpt-4)
- The training data now also includes "stylized_response", in which 1500 sample instructions from various categories were re-generated using character cards as system prompts.
- this should allow better adherence to style/etc. specified in the system card
- Thousands of new generations, using some of the updates re: Flesch hints, etc., to get longer/higher quality writing outputs.
- A small "de-alignment" dataset was also added (not published) to remove some of the censorship in the base models.
I used rope scaling to increase context length to 4096, but there wasn't a huge amount of data in that range so YMMV. You may need to rename the directory to `airoboros-33b-2-1` (anything without the `.`, really) so that `trust_remote_code=True` finds the module path properly.
*Why do I try to remove censorship?*
- laws vary widely based on time and location
- language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal
- these models just produce text, what you do with that text is your responsibility
- many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless
Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!
### Prompt format
The training code was updated to randomize newline vs space:
https://github.com/jondurbin/qlora/blob/main/qlora.py#L559C1-L559C1
```
A chat. USER: {prompt} ASSISTANT:
```
or
```
A chat.
USER: {prompt}
ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space or newline, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space or newline, followed by "ASSISTANT: " (with a single space after the colon).
__*I strongly suggest adding stopping criteria/early inference stopping on "USER:", because the training data includes many multi-round chats and could otherwise start simulating a conversation!*__
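One way to do that with `transformers` is a custom stopping criterion (a sketch; wiring it into your own generation loop is up to you):
```python
# Sketch: stop generation as soon as the newly generated text contains
# "USER:", so the model doesn't start simulating the next conversation turn.
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubstring(StoppingCriteria):
    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length  # number of prompt tokens to skip

    def __call__(self, input_ids, scores, **kwargs):
        new_text = self.tokenizer.decode(input_ids[0][self.prompt_length:])
        return self.stop_string in new_text

# Usage, assuming `tokenizer`, `model`, and a formatted `prompt` already exist:
# inputs = tokenizer(prompt, return_tensors="pt")
# criteria = StoppingCriteriaList(
#     [StopOnSubstring(tokenizer, "USER:", inputs["input_ids"].shape[1])]
# )
# output = model.generate(**inputs, max_new_tokens=512, stopping_criteria=criteria)
```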
### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
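Because the block format is mechanical, a small helper can assemble it (a sketch; the function and field names are placeholders):
```python
# Sketch: assemble a closed-context prompt from (metadata, text) pairs plus
# an instruction, using the delimiters described above.
def build_closed_context_prompt(blocks, instruction):
    parts = []
    for metadata, text in blocks:
        context = "\n".join(f"{k}: {v}" for k, v in metadata.items())
        parts.append(f"BEGININPUT\nBEGINCONTEXT\n{context}\nENDCONTEXT\n{text}\nENDINPUT")
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green.")],
    "What color are blueberries? Source?",
)
print(prompt)
```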
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
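On the application side, a minimal dispatch sketch for that JSON output (the registry and the `file_analytics` stub are hypothetical):
```python
import json

# Hypothetical registry mapping function names in the model's JSON output
# to real implementations; file_analytics is just a stub here.
def file_analytics(action, filters):
    return f"would run {action} with filters {filters}"

FUNCTIONS = {"file_analytics": file_analytics}

def dispatch(model_output: str):
    call = json.loads(model_output)
    handler = FUNCTIONS[call["function"]]
    return handler(**call["params"])

print(dispatch(
    '{"function": "file_analytics", "params": '
    '{"action": "count_occurrences", "filters": {"keyword": "Python"}}}'
))
```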
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested and off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    # Substitute any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string and return the text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt and return its output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+?)\s*(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        # Strip the surrounding brackets before passing the argument along.
        context[parts.group(1)] = method_map[parts.group(2)](parts.group(3)[1:-1], **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
This model is built on top of the original llama-30b, which has a strict noncommercial license.
The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with Meta's original LLaMA license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me. |
CHIH-HUNG/llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o | CHIH-HUNG | 2023-10-04T13:28:39Z | 1,633 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:huangyt/FINETUNE3",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-19T17:42:51Z | ---
license: llama2
datasets:
- huangyt/FINETUNE3
---
# Model Card for llama-2-13b-FINETUNE3_3.3w-r4-q_k_v_o
Trained on llama-2-13b with the huangyt/FINETUNE3 dataset, roughly 33,000 training examples in total.
# Fine-Tuning Information
- **GPU:** RTX4090 (single GPU / 24564MiB)
- **model:** meta-llama/Llama-2-13b-hf
- **dataset:** huangyt/FINETUNE3 (roughly 33,000 training examples in total)
- **peft_type:** LoRA
- **lora_rank:** 16
- **lora_target:** q_proj, k_proj, v_proj, o_proj
- **per_device_train_batch_size:** 8
- **gradient_accumulation_steps:** 8
- **learning_rate :** 4e-4
- **epoch:** 1
- **precision:** bf16
- **quantization:** load_in_4bit
# Fine-Tuning Detail
- **train_loss:** 0.579
- **train_runtime:** 4:06:11 (with deepspeed)
# Evaluation
- Compared with Llama-2-13b on four benchmarks: **ARC**, **HellaSwag**, **MMLU**, and **TruthfulQA**
- The evaluation results below use scores measured **locally**, with load_in_8bit
| Model |Average| ARC |HellaSwag| MMLU | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE3_3.3w-r4-q_k_v_o | 56.29 | 54.27 | 79.42 | 51.90 | 39.58 |
| FINETUNE3_3.3w-r8-q_k_v_o | 56.53 | 52.99 | 79.45 | 53.53 | 40.14 |
| FINETUNE3_3.3w-r16-q_k_v_o | 56.25 | 53.24 | 79.53 | 54.03 | 38.20 |
| FINETUNE3_3.3w-r4-gate_up_down | 55.79 | 51.02 | 79.37 | 53.36 | 39.40 |
| FINETUNE3_3.3w-r8-gate_up_down | 56.60 | 53.33 | 79.43 | 53.60 | 40.03 |
| FINETUNE3_3.3w-r16-gate_up_down | 56.34 | 51.88 | 79.42 | 54.64 | 39.44 |
| FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down | 56.67 | 53.07 | 79.34 | 54.07 | 40.19 |
| FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down | 56.93 | 54.61 | 79.16 | 53.51 | 40.46 |
| FINETUNE3_3.3w-r16-q_k_v_o_gate_up_down | 57.78 | 53.92 | 79.41 | 54.68 | 43.09 |
-------------------------------------------------------------------------------------------
- Evaluation results from **HuggingFaceH4/open_llm_leaderboard**
| Model |Average| ARC |HellaSwag| MMLU | TruthfulQA |
|-----------------------------------------|-------|-------|---------|-------|------------|
| FINETUNE3_3.3w-r4-q_k_v_o | 58.34 | 59.04 | 81.15 | 53 | 40.16 |
| FINETUNE3_3.3w-r8-q_k_v_o | 58.28 | 56.06 | 81.89 | 55.04 | 40.12 |
| FINETUNE3_3.3w-r16-q_k_v_o | 58.55 | 59.3 | 81.2 | 55.58 | 38.13 |
| FINETUNE3_3.3w-r4-gate_up_down | 57.79 | 56.4 | 81.93 | 53.63 | 39.23 |
| FINETUNE3_3.3w-r8-gate_up_down | 58.17 | 57.25 | 81.79 | 53.96 | 39.66 |
| FINETUNE3_3.3w-r16-gate_up_down | 58.91 | 58.7 | 81.89 | 56.08 | 38.95 |
| FINETUNE3_3.3w-r4-q_k_v_o_gate_up_down | 58.42 | 57.76 | 80.78 | 54.32 | 40.8 |
| FINETUNE3_3.3w-r8-q_k_v_o_gate_up_down | 58.26 | 57.94 | 81.19 | 53.43 | 40.48 |
| FINETUNE3_3.3w-r16-q_k_v_o_gate_up_down | 59.62 | 59.22 | 81.52 | 54.94 | 42.83 |
# How to convert the dataset to JSON
- Pass the dataset name to **load_dataset**, and use **take** to specify how many leading examples to fetch
- Check the dataset's column names and fill them into the **example** fields (e.g. system_prompt, question, response)
- Finally, specify where to save the JSON file (**json_filename**)
```py
import json
from datasets import load_dataset
# Load the dataset; take(n) can fetch the first n examples
dataset = load_dataset("huangyt/FINETUNE3", split="train", streaming=True)
# Extract the required fields and build a new list of dicts
extracted_data = []
for example in dataset:
extracted_example = {
"instruction": example["instruction"],
"input": example["input"],
"output": example["output"]
}
extracted_data.append(extracted_example)
# Specify the JSON file name
json_filename = "FINETUNE3.json"
# Write the JSON file
with open(json_filename, "w") as json_file:
json.dump(extracted_data, json_file, indent=4)
print(f"數據已提取並保存為 {json_filename}")
``` |
CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-q_k_v_o | CHIH-HUNG | 2023-09-22T00:33:11Z | 1,633 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-22T00:13:36Z | Entry not found |
totally-not-an-llm/EverythingLM-13b-V3-16k | totally-not-an-llm | 2023-09-23T23:30:34Z | 1,633 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-23T01:18:48Z | ---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data-V3
---
# EverythingLM-13b-V3-16k
Introducing EverythingLM, a llama-2-based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM-V3 dataset; more info can be found on the dataset page.
The model is completely uncensored.
Despite being "uncensored", the base model might be resistant; you might have to prompt-engineer certain prompts.
### Quants (Thanks TheBloke!):
https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GGUF
https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-GPTQ
https://huggingface.co/TheBloke/EverythingLM-13B-V3-16K-AWQ
### Notable features:
- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Good prompt understanding.
### Differences from V2:
- Much more uncensored.
- Actual roleplaying ability now!
- General all around improvements thanks to the new dataset. Check out the dataset for more info.
### Prompt format (Alpaca-chat):
```
USER: <prompt>
ASSISTANT:
```
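A minimal generation sketch using that template (sampling settings are illustrative assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "totally-not-an-llm/EverythingLM-13b-V3-16k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER: Write a short story about a lighthouse keeper.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```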
### Future plans:
- Highest priority right now is V3.1 with more optimized training and iterative dataset improvements based on testing.
### Note:
Through testing V2, I realized some alignment data had leaked in, causing the model to be less cooperative than intended. This model should do much better due to stricter filtering.
quantumaikr/quantum-dpo-v0.1 | quantumaikr | 2023-12-17T21:55:57Z | 1,633 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-17T21:32:31Z | ---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---
# quantumaikr/quantum-dpo-v0.1
## Usage
Start chatting with `quantumaikr/quantum-dpo-v0.1` using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("quantumaikr/quantum-dpo-v0.1")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/quantum-dpo-v0.1", torch_dtype=torch.float16, device_map="auto")
system_prompt = "You are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal."
message = "Write me a poem please"
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{message}[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.7, top_p=0.95, top_k=30, max_new_tokens=2048)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
QuantumLM should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.
### User:
Your prompt here
### Assistant
The output of QuantumLM
```
## Use and Limitations
### Intended Use
These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Limitations and bias
Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
Contact us : [email protected]
|
digitous/Javelin-GPTJ | digitous | 2023-02-25T20:32:59Z | 1,632 | 4 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-02-25T01:04:18Z | ---
license: creativeml-openrail-m
---
Javelin-GPTJ is an even four-way merge (25% each) of KoboldAI's GPT-J classics: (Janeway + Shinen) + (Adventure + Skein).

Janeway + Shinen is listed under JANIN-GPTJ. Adventure + Skein is listed under Adventien-GPTJ. The original models can all be found under KoboldAI.

This 4x merge is not intended for minors, as it can produce NC-17+ content (mostly from Shinen). A GPT-R merge variant will be released if it adds value to this already "kitchen sink" level of merging.
Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4 | Fredithefish | 2023-08-07T14:45:56Z | 1,632 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"dataset:Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-16T15:07:11Z | ---
license: cc
datasets:
- Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat
language:
- en
inference: false
---
<html>
<head>
<style>
.alert {
padding: 15px;
background-color: #f44336;
color: white;
}
</style>
</head>
<body>
<div class="alert">
<strong>Warning:</strong> This fine-tuned model has only undergone 200 steps of fine-tuning and may not be reliable. The final model will not be released.
</div>
</body>
</html>
<br>
# RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4
The RedPajama-INCITE-Chat-3B model, fine-tuned <a href="https://huggingface.co/datasets/Fredithefish/Instruction-Tuning-with-GPT-4-RedPajama-Chat">on this dataset</a>.
## Reproduction
The code for the finetuning of this model can be found at https://github.com/fredi-python/Fine-tune-RedPajama-Chat-3B
## Usage and License Notices
The Model is intended and licensed for research use only. The model is under the CC BY NC 4.0 license (allowing only non-commercial use) |
jondurbin/airoboros-13b-gpt4-1.1 | jondurbin | 2023-06-22T14:59:32Z | 1,632 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-09T22:06:28Z | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.1
---
## Delta 1.1
This is a minor update of https://huggingface.co/jondurbin/airoboros-13b-gpt4 with ~1k more coding instructions, and fixes/improvements to context instructions from https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1
The remainder of the model card is duplicated from the origin.
## Overview
This is a fine-tuned 13b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-13b-gpt4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with question-and-answer pairs in the context. If you ask a question of a document that contains question/answer pairs, the model may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts, or you may be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
### Other stuff
#### Better coding
Example:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>
To create a python application with the given requirements, you can follow these steps:
1. Install the required packages:
```bash
pip install fastapi uvicorn
```
2. Create a new file called `main.py` and add the following code:
```python
from datetime import datetime

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class FileModel(BaseModel):
    id: str
    file: str

@app.get("/ping")
async def get_current_date():
    # Return the current date in JSON format
    return {"date": datetime.now().isoformat()}

@app.post("/upload")
async def upload_file(file_model: FileModel):
    # Check that a file path was provided
    if not file_model.file:
        raise HTTPException(status_code=400, detail="File must be provided")
    # Calculate the sha256 checksum of the file
    sha256_checksum = await calculate_sha256(file_model.file)
    # Insert the file into the database
    await insert_into_db(file_model.id, sha256_checksum)
    return {"sha256_checksum": sha256_checksum}

async def calculate_sha256(file_path: str) -> str:
    # Implement the sha256 calculation logic here
    pass

async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
    # Implement the database insertion logic here
    pass
```
3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
```python
async def calculate_sha256(file_path: str) -> str:
    import hashlib
    from pathlib import Path

    data = Path(file_path).read_bytes()
    hash_obj = hashlib.sha256()
    hash_obj.update(data)
    return hash_obj.hexdigest()

async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
    import psycopg2

    conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
    cursor = conn.cursor()
    query = "INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
    cursor.execute(query, (file_id, sha256_checksum))
    conn.commit()
    cursor.close()
    conn.close()
```
4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
```python
async def get_current_date():
return {"date": datetime.now().isoformat()}
```
5. Run the application:
```bash
uvicorn main:app --reload
```
Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
</details>
<details>
<summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#define MAX_KEY_LEN 256
#define MAX_VALUE_LEN 256
typedef struct Node {
char key[MAX_KEY_LEN];
char value[MAX_VALUE_LEN];
struct Node* left;
struct Node* right;
} TreeNode;
TreeNode* tree = NULL;
pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

void insert(char* key, char* value) {
    TreeNode* node = (TreeNode*)malloc(sizeof(TreeNode));
    if (!node) {
        perror("malloc failed");
        exit(1);
    }
    strncpy(node->key, key, MAX_KEY_LEN - 1);
    node->key[MAX_KEY_LEN - 1] = '\0';
    strncpy(node->value, value, MAX_VALUE_LEN - 1);
    node->value[MAX_VALUE_LEN - 1] = '\0';
    node->left = node->right = NULL;
    pthread_mutex_lock(&tree_lock);
    if (!tree) {
        tree = node;
    } else {
        TreeNode* current = tree;
        while (1) {
            int cmp = strcmp(node->key, current->key);
            if (cmp == 0) {
                /* Key already exists: update the value in place. */
                strcpy(current->value, node->value);
                free(node);
                break;
            } else if (cmp < 0) {
                if (!current->left) { current->left = node; break; }
                current = current->left;
            } else {
                if (!current->right) { current->right = node; break; }
                current = current->right;
            }
        }
    }
    pthread_mutex_unlock(&tree_lock);
}
char* search(char* key) {
    pthread_mutex_lock(&tree_lock);
    TreeNode* current = tree;
    char* result = NULL;
    while (current != NULL) {
        int cmp = strcmp(key, current->key);
        if (cmp == 0) {
            result = current->value;
            break;
        }
        current = (cmp < 0) ? current->left : current->right;
    }
    pthread_mutex_unlock(&tree_lock);
    return result;
}
void print_tree(TreeNode* node) {
    /* In-order traversal prints key/value pairs in sorted key order. */
    if (node == NULL) {
        return;
    }
    print_tree(node->left);
    printf("%s = %s\n", node->key, node->value);
    print_tree(node->right);
}
void* handle_client(void* args) {
    int client_socket = *(int*)args;
    free(args);
    char buffer[1024];
    while (1) {
        ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer) - 1, 0);
        if (bytes_received <= 0) {
            break;
        }
        buffer[bytes_received] = '\0';
        char* command = strtok(buffer, " \r\n");
        if (command == NULL) {
            continue;
        }
        if (strcmp(command, "GET") == 0) {
            char* key = strtok(NULL, " \r\n");
            char* value = key ? search(key) : NULL;
            char response[MAX_VALUE_LEN + 16];
            snprintf(response, sizeof(response), "%s\n", value ? value : "NOT FOUND");
            send(client_socket, response, strlen(response), 0);
        } else if (strcmp(command, "SET") == 0) {
            char* key = strtok(NULL, " \r\n");
            char* value = strtok(NULL, " \r\n");
            if (key && value) {
                insert(key, value);
                send(client_socket, "OK\n", 3, 0);
            } else {
                send(client_socket, "ERR invalid input\n", 18, 0);
            }
        } else {
            send(client_socket, "ERR unknown command\n", 20, 0);
        }
    }
    close(client_socket);
    return NULL;
}
int main() {
    int server_socket = socket(AF_INET, SOCK_STREAM, 0);
    if (server_socket < 0) {
        perror("socket failed");
        exit(1);
    }
    struct sockaddr_in server_addr;
    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_port = htons(8080);
    server_addr.sin_addr.s_addr = INADDR_ANY;
    if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
        perror("bind failed");
        exit(1);
    }
    if (listen(server_socket, 5) < 0) {
        perror("listen failed");
        exit(1);
    }
    while (1) {
        int* client_socket = malloc(sizeof(int));
        if (!client_socket) {
            continue;
        }
        *client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
        if (*client_socket < 0) {
            perror("accept failed");
            free(client_socket);
            continue;
        }
        /* Spawn one detached worker thread per client connection. */
        pthread_t client_thread;
        pthread_create(&client_thread, NULL, handle_client, client_socket);
        pthread_detach(client_thread);
    }
    close(server_socket);
    return 0;
}
```
To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
```sh
gcc -pthread -o multi_threaded_tcp_server multi_threaded_tcp_server.c
./multi_threaded_tcp_server
```
This implementation stores key/value pairs in a simple (unbalanced, mutex-guarded) binary search tree. The main thread accepts client connections and spawns a detached thread per connection; each client thread parses GET and SET commands and writes the result back over the socket.
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
</details>
#### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
#### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```
#### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially. |
dsvv-cair/alpaca-cleaned-llama-30b-bf16 | dsvv-cair | 2023-11-24T06:12:17Z | 1,632 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-21T05:24:15Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
**Method** : QLORA
**Dataset** : yahma/alpaca-cleaned
**Base model** : huggyllama/llama-30b
**Compute dtype** : bfloat16
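For readers unfamiliar with the method named above, here is a minimal sketch of 4-bit NF4 QLoRA-style loading with `transformers`/`peft`; the LoRA hyperparameters are illustrative assumptions, not the settings actually used for this model:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute, matching the card's dtype
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b", quantization_config=bnb_config, device_map="auto"
)
# Illustrative LoRA config; the real r/alpha/target modules are not stated
model = get_peft_model(model, LoraConfig(r=64, lora_alpha=16, task_type="CAUSAL_LM"))
```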
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dsvv-cair__alpaca-cleaned-llama-30b-bf16)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 50.12 |
| ARC (25-shot) | 61.77 |
| HellaSwag (10-shot) | 85.06 |
| MMLU (5-shot) | 57.52 |
| TruthfulQA (0-shot) | 51.49 |
| Winogrande (5-shot) | 77.35 |
| GSM8K (5-shot) | 7.73 |
| DROP (3-shot) | 9.91 |
|
FabbriSimo01/Bloom_1b_Quantized | FabbriSimo01 | 2023-07-15T17:44:29Z | 1,632 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | text-generation | 2023-07-15T17:37:01Z | ---
license: bigscience-bloom-rail-1.0
---
|
shorthillsai/flan-t5-large-absa | shorthillsai | 2023-09-27T12:16:14Z | 1,632 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"absa",
"AspectBasedSentimentAnalysis",
"Classification",
"sentiment",
"en",
"base_model:google/flan-t5-large",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2023-07-20T11:53:19Z | ---
language:
- en
tags:
- absa
- AspectBasedSentimentAnalysis
- Classification
- sentiment
base_model: google/flan-t5-large
---
# flan-t5-large-absa
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large), trained on a custom dataset prepared by GPT-4 and verified by humans.
## Model description
A text-to-text model for aspect-based sentiment analysis.
## Intended uses & limitations
This is not for commercial use since the dataset was prepared using OpenAI with humans in the loop. It must be tested on the required dataset for accuracy before being released to production.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam
- num_epochs: 5
- bf16: True
### Package Versions
- Transformers 4.27.2
- torch 1.13.1
- Datasets 2.13.1
- Tokenizers 0.13.3
### Machine Used and time taken
- RTX 3090: 8 hrs. 35 mins.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("shorthillsai/flan-t5-large-absa", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("shorthillsai/flan-t5-large-absa", truncation=True)
prompt = """Find the aspect based sentiment for the given review. 'Not present' if the aspect is absent.\n\nReview:I love the screen of this laptop and the battery life is amazing.\n\nAspect:Battery Life\n\nSentiment: """
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids=input_ids)
output_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
``` |
The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 | The-Face-Of-Goonery | 2023-08-08T15:25:43Z | 1,632 | 6 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-03T20:02:38Z | merged 58% chronos v2 42% beluga 13b merge using LUNK(Large universal neural kombiner)
https://huggingface.co/The-Face-Of-Goonery/Beluga-Hermes_LLama-2_13b
https://huggingface.co/elinas/chronos-13b-v2/tree/main |
pe-nlp/llama-2-13b-vicuna-wizard | pe-nlp | 2023-12-12T08:12:10Z | 1,632 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-11T01:41:05Z |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pe-nlp__llama-2-13b-vicuna-wizard)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 50.79 |
| ARC (25-shot) | 57.76 |
| HellaSwag (10-shot) | 82.16 |
| MMLU (5-shot) | 54.68 |
| TruthfulQA (0-shot) | 41.11 |
| Winogrande (5-shot) | 74.98 |
| GSM8K (5-shot) | 0.91 |
| DROP (3-shot) | 43.94 |
|
zarakiquemparte/kuchiki-l2-7b | zarakiquemparte | 2023-09-14T13:48:00Z | 1,632 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-12T15:37:50Z | ---
license: other
tags:
- llama2
---
# Model Card: Kuchiki L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (70%) as a base with [Airoboros L2 7B GPT4 2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) (30%) and the result of this merge was merged with [LimaRP LLama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2).
This merge of models (Hermes and Airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
The merge of the LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
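Conceptually, a proportional merge like the 70/30 split above is a weighted average of the two models' parameters. The sketch below illustrates the idea only; it is not the zaraki-tools implementation:

```python
import torch

def linear_merge(state_a: dict, state_b: dict, weight_a: float = 0.7) -> dict:
    # Weighted average of two architecture-compatible state dicts
    return {
        name: weight_a * tensor + (1.0 - weight_a) * state_b[name]
        for name, tensor in state_a.items()
    }
```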
Quantized Model by @TheBloke:
- [GGUF](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GGUF)
- [GPTQ](https://huggingface.co/TheBloke/Kuchiki-L2-7B-GPTQ)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details. |
lvkaokao/llama2-7b-hf-chat-lora-v2 | lvkaokao | 2023-08-15T03:57:25Z | 1,632 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-15T03:58:21Z | ---
license: llama2
---
|
zarakiquemparte/zarafusionix-l2-7b | zarakiquemparte | 2023-08-18T14:50:15Z | 1,632 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-18T13:35:55Z | ---
license: other
tags:
- llama2
---
# Model Card: Zarafusionix L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (62%) as a base with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (38%) and the result of this merge was merged with [LimaRP LLama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2).
This merge of models (Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
The merge of the LoRA with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Stable Beluga and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details. |
Sao10K/Mythical-Destroyer-L2-13B | Sao10K | 2023-11-18T04:34:01Z | 1,632 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-29T12:44:18Z |
---
license: llama2
language:
- en
---
**THEBLOKE HAS QUANTS!**
<br>https://huggingface.co/TheBloke/Mythical-Destroyer-L2-13B-GPTQ
<br>https://huggingface.co/TheBloke/Mythical-Destroyer-L2-13B-GGUF
<br>A Merge done for @dampf
**FULL FP16 Model**
<br>Base Model [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16)
<br> **MERGED WITH**
<br>-----[Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
<br>-----[totally-not-an-llm/PuddleJumper-13b](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b)
<br>-----[TheBloke/Llama-2-13B-Chat-fp16](https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16)
<br>-----[rombodawg/LosslessMegaCoder-llama2-13b-mini](https://huggingface.co/rombodawg/LosslessMegaCoder-llama2-13b-mini)
<br>-----[The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
<br>*using ties-merge*
```
Dampf's Rationale:
if you think about it, the merges kinda act as experts in my destroyer.
mythomax and chronos-beluga for creativity,
llama 2 13b chat and puddlejumper for instruct and losslessmegacoder for logic/code
if this works well...
it should be really, really good
---
mythical destroyer will be used for rp and instruct as well as coding tasks a like
and it should be good at everything
---
```
<br>Script used to Merge [here](https://github.com/cg123/ties-merge)
<br>Thank you for the easy-to-set-up script, [Chargoddard](https://huggingface.co/chargoddard)!
Command:
```
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./Mythical-Destroyer-13B --merge Gryphe/MythoMax-L2-13b --merge totally-not-an-llm/PuddleJumper-13b --merge TheBloke/Llama-2-13B-Chat-fp16 --merge rombodawg/LosslessMegaCoder-llama2-13b-mini --merge The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 --cuda
```
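For intuition, TIES merging works on per-model task vectors (fine-tuned weights minus base weights): trim each to its largest-magnitude entries, elect a sign per parameter, then average only the values that agree with that sign. The sketch below is purely illustrative and omits the real script's machinery:

```python
import torch

def ties_combine(task_vectors, density=0.2):
    # Trim: keep only the top-density fraction of entries by magnitude
    trimmed = []
    for tv in task_vectors:
        k = max(1, int(density * tv.numel()))
        threshold = tv.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv)))
    stacked = torch.stack(trimmed)
    # Elect: pick a sign per parameter from the summed trimmed values
    sign = torch.sign(stacked.sum(dim=0))
    # Merge: average only the entries whose sign agrees with the elected one
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    count = agree.sum(dim=0).clamp(min=1)
    return (stacked * agree).sum(dim=0) / count
```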
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Mythical-Destroyer-L2-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 50.13 |
| ARC (25-shot) | 58.7 |
| HellaSwag (10-shot) | 82.0 |
| MMLU (5-shot) | 57.66 |
| TruthfulQA (0-shot) | 56.35 |
| Winogrande (5-shot) | 74.66 |
| GSM8K (5-shot) | 8.95 |
| DROP (3-shot) | 12.56 |
|
The-Face-Of-Goonery/Huginn-13b-V4 | The-Face-Of-Goonery | 2023-08-29T16:45:04Z | 1,632 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-29T16:11:12Z | Oh jesus, huginn 13b actually has still more to offer? welp, time for another stupid super ultra mega merge because.. yes. Woooo, v4.5 will come when they make the better mythic destroyer
Merge of https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b and https://huggingface.co/Sao10K/Mythical-Destroyer-L2-13B/tree/main
Still uses the Alpaca format.
Sao10K/Mythical-Destroyer-V2-L2-13B | Sao10K | 2023-11-18T04:34:47Z | 1,632 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-29T16:53:58Z |
---
license: llama2
language:
- en
---
<br>A Merge done for @dampf
**FULL FP16 Model**
**V2 Model**
<br>Changelog:
<br>REMOVED - Llama-2-13B-Chat-fp16 (reason: censored, likely amplified base model quirks)
<br>ADDED - jondurbin/airoboros-l2-13b-2.1 (ghost attention, improved RP and instruction)
<br>Base Model [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16)
<br> **MERGED WITH**
<br>-----[Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
<br>-----[totally-not-an-llm/PuddleJumper-13b](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b)
<br>-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<br>-----[rombodawg/LosslessMegaCoder-llama2-13b-mini](https://huggingface.co/rombodawg/LosslessMegaCoder-llama2-13b-mini)
<br>-----[The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16](https://huggingface.co/The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16)
<br>*using ties-merge*
```
Dampf's Rationale:
I did receive feedback from some users that it likes to add notes and morality to erp stories.
i will kick llama 2 chat and make an uncensored V2 version
in llama 2 chat's place will be the freshly released airoboros 2.1
---
well it was not bad, it was just censored because of llama 2 13b chat
i guess charles was really serious about each model retaining its shape
i was expecting parts of it to get watered down, but judging from the strong influence of llama chat that wasn't the case
```
Alpaca is its main format, but others can be used as well; Vicuna 1.1 should also work well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
LIMITATIONS:
While some of V1's issues have been fixed, there are remaining issues that make the model less usable in certain scenarios such as roleplaying: the model explains actions and breaks character regularly.
Update: I found out this was largely due to SillyTavern's formatting. If you are using SillyTavern, make sure to disable example chats formatting and chat start formatting.
<br>Script used to Merge [here](https://github.com/cg123/ties-merge)
<br>Thank you for the easy-to-set-up script, [Chargoddard](https://huggingface.co/chargoddard). I also want to thank all these hard-working model creators for their contributions to the open-source community!
Command:
```
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./Mythical-Destroyer-V2-13B --merge Gryphe/MythoMax-L2-13b --merge totally-not-an-llm/PuddleJumper-13b --merge jondurbin/airoboros-l2-13b-2.1 --merge rombodawg/LosslessMegaCoder-llama2-13b-mini --merge The-Face-Of-Goonery/Chronos-Beluga-v2-13bfp16 --cuda
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Mythical-Destroyer-V2-L2-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 48.79 |
| ARC (25-shot) | 59.3 |
| HellaSwag (10-shot) | 82.66 |
| MMLU (5-shot) | 57.39 |
| TruthfulQA (0-shot) | 57.09 |
| Winogrande (5-shot) | 74.74 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 10.33 |
|
pszemraj/pythia-31m-simplewiki-scratch-bf16 | pszemraj | 2023-11-18T12:58:54Z | 1,632 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"dataset:pszemraj/simple_wikipedia_LM",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-15T03:26:05Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
inference:
parameters:
max_new_tokens: 64
do_sample: true
repetition_penalty: 1.1
no_repeat_ngram_size: 5
guidance_scale: 1.01
eta_cutoff: 0.001
widget:
- text: My name is El Microondas the Wise and
example_title: El Microondas
- text: A meme is
example_title: meme
- text: >-
Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
He chose her because she had
example_title: Coreference resolution
- text: >-
On a shelf, there are five books: a gray book, a red book, a purple book,
a blue book, and a black book
example_title: Logic puzzles
- text: >-
The two men running to become New York City's next mayor will face off in
their first debate Wednesday night
example_title: Reading comprehension
license: apache-2.0
datasets:
- pszemraj/simple_wikipedia_LM
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-31m-simplewiki-scratch-bf16
Trained from a randomly initialized config based on [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m), for 3 epochs in bf16.
It achieves the following results on the evaluation set:
- Loss: 4.1763
- Accuracy: 0.3676
## Model description
Tuned in bf16 (the previous version used fp32).
## Intended uses & limitations
More information needed
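For quick experimentation, though, a `transformers` pipeline using a subset of the widget's sampling settings is enough:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="pszemraj/pythia-31m-simplewiki-scratch-bf16")
result = pipe(
    "A meme is",
    max_new_tokens=64,
    do_sample=True,
    repetition_penalty=1.1,
    no_repeat_ngram_size=5,
)
print(result[0]["generated_text"])
```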
## Training and evaluation data
```
***** eval metrics *****
epoch                   = 2.99
eval_accuracy           = 0.3723
eval_loss               = 4.1155
eval_runtime            = 0:00:14.44
eval_samples            = 500
eval_samples_per_second = 34.602
eval_steps_per_second   = 17.301
perplexity = 61.2811
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.8617 | 0.45 | 100 | 5.5276 | 0.2451 |
| 5.2782 | 0.9 | 200 | 4.9596 | 0.2965 |
| 4.9996 | 1.35 | 300 | 4.6412 | 0.3310 |
| 4.6292 | 1.8 | 400 | 4.4344 | 0.3485 |
| 4.5339 | 2.25 | 500 | 4.2875 | 0.3600 |
| 4.5214 | 2.7 | 600 | 4.1763 | 0.3676 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.2.0.dev20230907+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_pszemraj__pythia-31m-simplewiki-scratch-bf16)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.63 |
| ARC (25-shot) | 22.78 |
| HellaSwag (10-shot) | 25.61 |
| MMLU (5-shot) | 23.12 |
| TruthfulQA (0-shot) | 49.65 |
| Winogrande (5-shot) | 50.51 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.72 |
|
Sao10K/BrainDerp3 | Sao10K | 2023-11-18T04:33:17Z | 1,632 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-29T15:32:47Z | ---
license: llama2
language:
- en
---
'We have EnSEmblEd toP RAnKeRs tO mAKE oUr mODeL.'
'This MoDeL hAS BeEN trAINED oN orCa-StyLe DatAsEtS.'
'***trAINed***' KEK
'We hAVE AchIEvEd TOp RAnKeR In thE leAdErBOArDs.'
Lmao, it's all BS. You're all running merge scripts like we do, literally trying to game the leaderboards, huh? Literally zero card info, copy-pasted from Llama2 base models, with no goal other than chasing leaderboard spots.
At least merge with a goal, like, come on. I'm going for RP, Undi's going for RP. At least make up a believable goal.
BrainDerp is exposing the frauds who lie about it lol. (You know who; just look at the leaderboards lol.) At least be honest if you're merging like us poor people, which I appreciate some models do.
For BrainDerp, I have 'ensembled' several random models from the leaderboard to create this model.
I could easily steal the top leaderboard spots, but meh, not worth the effort.
That's it. I didn't bother testing much. YMMV.
<img src="https://blog.cdn.own3d.tv/resize=fit:crop,height:400,width:600/tbv2RYWpReqNtof2dD0U" style="width: 70%; min-width: 300px; display: block; margin: auto;">
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__BrainDerp3)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.1 |
| ARC (25-shot) | 60.92 |
| HellaSwag (10-shot) | 82.1 |
| MMLU (5-shot) | 58.91 |
| TruthfulQA (0-shot) | 57.18 |
| Winogrande (5-shot) | 75.61 |
| GSM8K (5-shot) | 8.04 |
| DROP (3-shot) | 14.92 |
|
QuantFactory/Qwen2-1.5B-Ita-GGUF | QuantFactory | 2024-06-19T11:52:18Z | 1,632 | 2 | transformers | [
"transformers",
"gguf",
"text-generation",
"it",
"base_model:DeepMount00/Qwen2-1.5B-Ita",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-19T05:05:42Z | ---
language:
- it
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
base_model: DeepMount00/Qwen2-1.5B-Ita
---
# QuantFactory/Qwen2-1.5B-Ita-GGUF
This is quantized version of [DeepMount00/Qwen2-1.5B-Ita](https://huggingface.co/DeepMount00/Qwen2-1.5B-Ita) created using llama.cpp
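One way to try the GGUF files is via `llama-cpp-python`; the quantized filename below is a guess, so substitute an actual `.gguf` file from this repo:

```python
from llama_cpp import Llama

# Model filename is an assumption; check the repository's file list
llm = Llama(model_path="Qwen2-1.5B-Ita.Q4_K_M.gguf", n_ctx=2048)
out = llm("Qual è la capitale d'Italia?", max_tokens=64)
print(out["choices"][0]["text"])
```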
# Model Description
### Qwen2 1.5B: Almost the Same Performance as ITALIA (iGenius) but 6 Times Smaller 🚀
### Model Overview
**Model Name:** Qwen2 1.5B Fine-tuned for Italian Language
**Version:** 1.5b
**Model Type:** Language Model
**Parameter Count:** 1.5 billion
**Language:** Italian
**Comparable Model:** [ITALIA by iGenius](https://huggingface.co/iGeniusAI) (9 billion parameters)
### Model Description
Qwen2 1.5B is a compact language model specifically fine-tuned for the Italian language. Despite its relatively small size of 1.5 billion parameters, Qwen2 1.5B demonstrates strong performance, nearly matching the capabilities of larger models, such as the **9 billion parameter ITALIA model by iGenius**. The fine-tuning process focused on optimizing the model for various language tasks in Italian, making it highly efficient and effective for Italian language applications.
### Performance Evaluation
The performance of Qwen2 1.5B was evaluated on several benchmarks and compared against the ITALIA model. The results are as follows:
| Model | Parameters | Average | MMLU | ARC | HELLASWAG |
|------------|-------------|---------|-------|-------|-----------|
| ITALIA | 9B | 43.5 | 35.22 | 38.49 | 56.79 |
| Qwen2 1.5B | 1.5B | 43.18 | 49.04 | 33.06 | 47.13 |
### Conclusion
Qwen2 1.5B demonstrates that a smaller, more efficient model can achieve performance levels comparable to much larger models. It excels in the MMLU benchmark, showing its strength in multitask language understanding. While it scores slightly lower in the ARC and HELLASWAG benchmarks, its overall performance makes it a viable option for Italian language tasks, offering a balance between efficiency and capability. |
facebook/detr-resnet-50-dc5 | facebook | 2023-09-07T15:35:22Z | 1,631 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"detr",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2005.12872",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2022-03-02T23:29:05Z | ---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)
DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).
Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
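For intuition, the matching step itself can be sketched with SciPy's Hungarian solver; the real DETR matcher builds its cost matrix from class probabilities plus L1/GIoU box terms, which the random stand-in below glosses over:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

num_queries, num_targets = 100, 4
cost = np.random.rand(num_queries, num_targets)  # stand-in for class + box costs
query_idx, target_idx = linear_sum_assignment(cost)
# Each ground-truth object is matched to exactly one query; the remaining
# 96 queries are supervised to predict the "no object" class.
```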
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.
### How to use
Here is how to use this model:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5')
model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50-dc5')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
### Training
The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).
## Evaluation results
This model achieves an AP (average precision) of **43.3** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2005-12872,
author = {Nicolas Carion and
Francisco Massa and
Gabriel Synnaeve and
Nicolas Usunier and
Alexander Kirillov and
Sergey Zagoruyko},
title = {End-to-End Object Detection with Transformers},
journal = {CoRR},
volume = {abs/2005.12872},
year = {2020},
url = {https://arxiv.org/abs/2005.12872},
archivePrefix = {arXiv},
eprint = {2005.12872},
timestamp = {Thu, 28 May 2020 17:38:09 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
ewof/koishi-instruct-3b | ewof | 2023-11-17T21:24:34Z | 1,631 | 3 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"dataset:ewof/koishi-instruct-metharme",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-12T03:36:18Z | ---
datasets:
- ewof/koishi-instruct-metharme
---
## Base Model
A native fine-tune of togethercomputer/RedPajama-INCITE-Base-3B-v1.
## Prompting
The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.
The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and can be chained to form a conversation history.
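Concretely, a prompt built with these role tokens might be assembled like this (the content strings are placeholders):

```python
turns = [
    ("<|system|>", "Background information the user never sees."),
    ("<|user|>", "Summarize the plot so far."),
]
# Concatenate role token + text for each turn, then cue the model to respond
prompt = "".join(token + text for token, text in turns) + "<|model|>"
```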
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ewof__koishi-instruct-3b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 33.48 |
| ARC (25-shot) | 40.96 |
| HellaSwag (10-shot) | 64.54 |
| MMLU (5-shot) | 26.58 |
| TruthfulQA (0-shot) | 31.65 |
| Winogrande (5-shot) | 64.09 |
| GSM8K (5-shot) | 1.14 |
| DROP (3-shot) | 5.41 |
|
Rachneet/gpt2-xl-alpaca | Rachneet | 2023-09-20T13:44:39Z | 1,631 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-16T16:24:02Z | Entry not found |
jondurbin/airoboros-7b-gpt4-1.2 | jondurbin | 2023-06-22T14:58:53Z | 1,631 | 28 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-15T16:02:29Z | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---
### Overview
This is a qlora fine-tuned 7b parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.1), but with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the previous versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single all-caps term at the end of the normal instructions; these produce plain-text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred role-playing examples.
- A few thousand Orca-style reasoning/math questions with ELI5 prompts used to generate the responses (the ELI5 phrasing should not be needed in your prompts to this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially. |
jondurbin/airoboros-65b-gpt4-1.3 | jondurbin | 2023-06-22T14:58:39Z | 1,631 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-20T07:09:33Z | ---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---
__This version has problems, use if you dare, or wait for 1.4.__
### Overview
This is a qlora fine-tuned 65b parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new Orca-style reasoning instructions, this time with the reasoning first, then the answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with previous full fine-tune versions.
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-65b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Training details
Fine-tuned with my fork of qlora: https://github.com/jondurbin/qlora
Using:
```
export WANDB_PROJECT=airoboros-65b-gpt4-1.3
python qlora.py \
--model_name_or_path ./llama-65b-hf \
--output_dir ./airoboros-65b-gpt4-1.3-peft \
--max_steps 2520 \
--logging_steps 1 \
--save_strategy steps \
--data_seed 11422 \
--save_steps 75 \
--save_total_limit 3 \
--evaluation_strategy "no" \
--eval_dataset_size 2 \
--max_new_tokens 2800 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--remove_unused_columns False \
--do_train \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--double_quant \
--quant_type nf4 \
--bf16 \
--bits 4 \
--warmup_ratio 0.03 \
--lr_scheduler_type constant \
--gradient_checkpointing \
--dataset instructions.jsonl \
--dataset_format airoboros \
--model_max_len 2800 \
--per_device_train_batch_size 2 \
--gradient_accumulation_steps 16 \
--learning_rate 0.0001 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.05 \
--weight_decay 0.0 \
--seed 11422 \
--report_to wandb
```
Three file modifications to the base llama:
- llama-65b-hf/tokenizer_config.json (see this repo's version, updated to have a 4096 max sequence length during training to accommodate the training data)
- llama-65b-hf/special_tokens_map.json (see this repo's version)
- llama-65b-hf/config.json (updated to temporarily have a max model size of 4096 to accommodate the training data)
Afterwards, the changes to max model length and sequence length are reduced back to 2048 to avoid ... issues ...
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially. |
golaxy/gogpt2-7b | golaxy | 2023-07-31T09:36:45Z | 1,631 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:BelleGroup/train_0.5M_CN",
"dataset:BelleGroup/train_1M_CN",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"dataset:BAAI/COIG",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-21T09:52:20Z | ---
license: apache-2.0
datasets:
- BelleGroup/train_0.5M_CN
- BelleGroup/train_1M_CN
- c-s-ale/alpaca-gpt4-data-zh
- BAAI/COIG
language:
- zh
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-7B: A Chinese-English Enhanced Large Model Trained on Llama2-7b

<p align="center">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square">
<img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca">
</p>
> ICT Chinese-English enhanced foundation models: 7 billion and 13 billion parameters
GoGPT weights are available on 🤗 Hugging Face; the gogpt-7b and gogpt2-7b weights are currently released
| Model Name | Base Model | Size | Download |
|-------------------------------------------------------------|-----------|------|-------------------------------------------------|
| [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) |
| [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) |
| [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) |
| [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) |
[GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT)
## 🚀 Step 1: Train the tokenizer
[🐱 How to train an LLM tokenizer from scratch](https://github.com/yanqiangmiffy/how-to-train-tokenizer)
```text
├── data
│   └── corpus.txt                 training corpus
├── llama
│   ├── tokenizer_checklist.chk
│   └── tokenizer.model
├── merged_tokenizer_hf            merged result, HF format
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── tokenizer.model
├── merged_tokenizer_sp
│   └── open_llama.model
├── merge_tokenizer
│   └── tokenizer.model
├── open_llama.model               trained SentencePiece model
├── open_llama.vocab               trained SentencePiece vocabulary
├── README.md
├── step0_step0_process_text.py    prepare the training corpus from multiple datasets
├── step1_make_corpus.py           prepare the training corpus from Chinese Wikipedia
├── step2_train_tokenzier.py       train the tokenizer
├── step3_tokenzier_segment.py     test the trained model, including encode/decode examples
└── step4_merge_tokenizers.py      merge with the original LLaMA tokenizer to get an HF-format tokenizer
```
The original Llama2 vocabulary size is **32000**; after merging with the Chinese tokenizer trained with a 40k vocabulary, the vocabulary size becomes **68419**, and **68420** after adding a pad token for SFT. A minimal sketch of the merging step is shown below.
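The sketch below illustrates the idea behind the merging step; the paths and the deduplication policy are assumptions for illustration, and `step4_merge_tokenizers.py` above is the real script:
```python
from transformers import LlamaTokenizer
import sentencepiece as spm
from sentencepiece import sentencepiece_model_pb2 as sp_pb2

llama_tokenizer = LlamaTokenizer.from_pretrained("llama")  # original llama tokenizer directory (assumed path)
chinese_sp = spm.SentencePieceProcessor(model_file="open_llama.model")  # trained Chinese sp model

llama_proto = sp_pb2.ModelProto()
llama_proto.ParseFromString(llama_tokenizer.sp_model.serialized_model_proto())
chinese_proto = sp_pb2.ModelProto()
chinese_proto.ParseFromString(chinese_sp.serialized_model_proto())

existing = {p.piece for p in llama_proto.pieces}
for p in chinese_proto.pieces:
    if p.piece not in existing:  # only add pieces llama does not already have
        new_piece = sp_pb2.ModelProto().SentencePiece()
        new_piece.piece, new_piece.score = p.piece, 0
        llama_proto.pieces.append(new_piece)

with open("merged_tokenizer_sp/tokenizer.model", "wb") as f:
    f.write(llama_proto.SerializeToString())
```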
## 🚀 Step 2: Secondary Pre-training
> Incremental and continued pre-training of LLaMA on a Chinese pre-training corpus; the training corpus is currently 20GB and will continue to be updated iteratively
## 🚀 Step 3: Supervised Fine-tuning
Fine-tuned on diverse instruction data, including the Chinese and English instruction data of belle and alpaca as well as moss multi-turn dialogue data, completing instruction fine-tuning on 1.2M+ examples
- belle data: 120k examples, v1
- stanford_alpaca: 52k examples, v2
- stanford_alpaca_gpt4_zh: 52k examples, v2
- [sharegpt](data%2Ffinetune%2Fsharegpt): 90k examples
- [fnlp/moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data): moss multi-turn dialogue data
After sampling by length (output length around 500), 110k instruction examples were selected for SFT training; a toy sketch of this filtering step follows below.
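A toy sketch of the length-based filtering described above (the field names, threshold, and file paths are assumptions for illustration):
```python
import json

kept = []
with open("sft_data.jsonl", encoding="utf-8") as f:
    for line in f:
        example = json.loads(line)
        # keep examples whose output length is around the ~500-character target
        if len(example["output"]) <= 500:
            kept.append(example)

with open("sft_data_filtered.jsonl", "w", encoding="utf-8") as f:
    for example in kept:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```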
## Test Results
- What are the differences between watermelon and pumpkin?
```
1. Appearance: Watermelons are usually round or oval, with a light yellow or brown rind and red or yellow flesh inside. Pumpkins are also round or oval, usually with a light yellow or brown rind and orange-yellow flesh inside.
2. Varieties: Watermelons mainly come in three varieties: small, medium, and giant. Small watermelons usually have only one or two seed cores, medium watermelons usually have two or three, and giant watermelons have four or more. Pumpkins mainly come in two varieties: pumpkin and radish pumpkin. Pumpkins usually have two or three seed cores, while radish pumpkins have four or more.
3. Growing environment: Watermelons need a warm climate and plenty of sunlight, and usually grow in tropical or subtropical regions. Pumpkins grow in temperate and cold regions; they need a longer winter to grow, while watermelons usually grow in spring and summer.
4. Nutritional value: Watermelons and pumpkins are both fruits rich in vitamin C and fiber. Watermelon is also rich in antioxidants and potassium, while pumpkin is rich in folate and vitamin E.
Overall, watermelon and pumpkin are two completely different kinds of fruit, differing in appearance, variety, growing environment, and nutritional value.
```
- How can I improve my time management skills?
```
Response: 1. Make plans and set goals: Create weekly or daily plans and goals to help you allocate your time better.
2. Prioritize the most important tasks: Complete the most important tasks first; this helps you avoid distraction and wasted time.
3. Avoid procrastination: Try not to procrastinate and start tasks early, so that you can complete them more easily.
4. Focus your attention: Concentrate and avoid handling several tasks at once; this helps reduce distraction and improve efficiency.
5. Use tools and techniques: Use time management tools and techniques such as calendars, reminders, and to-do lists to help you manage your time better.
6. Learn to say "no": Learn to turn down unnecessary things; this helps you make better use of your time and focus on the most important tasks.
7. Keep a positive mindset: Stay positive, believe that you can manage your time, and keep learning and developing your time management skills.
8. Rest and relax: Try to keep a balance between work and life, and rest and relax regularly; this helps improve your time management skills and your work efficiency.
```
- What are the main differences between the Python and JavaScript programming languages?
```
1. Different syntax: Python is an object-oriented programming language with fairly strict syntax that uses curly braces to mark code blocks. JavaScript is a scripting language with relatively loose syntax that uses curly braces or indentation to mark code blocks.
2. Different application domains: Python is widely used in data analysis, machine learning, artificial intelligence, and other fields thanks to its readability, clear syntax, and powerful features. JavaScript is mainly used in front-end development, back-end development, mobile development, and other areas.
3. Different runtime environments: Python's runtime environment is relatively independent and can run on different operating systems. JavaScript's runtime environment is comparatively more complex and must account for browser and web server compatibility issues.
4. Different use cases: Python is better suited to scientific computing, data analysis, machine learning, and similar applications, with libraries such as Pandas and Numpy; JavaScript is better suited to scripting for front-end, back-end, and mobile development, with frameworks such as React and Angular.
5. Different libraries and frameworks: Python has a rich standard library and many third-party libraries, such as Django and Flask; JavaScript has popular web development frameworks, such as React and Angular.
```
- How will AI affect agricultural production?
```
AI will help improve the efficiency of agricultural production, optimize crop planting and harvesting, and raise the quality of agricultural products.
AI technology can be applied to plant monitoring, irrigation management, pest and disease control, soil analysis, and more, reducing labor costs, improving production efficiency, and also lowering resource waste and environmental pollution in agricultural production. For example, AI can monitor environmental factors such as soil moisture and weather conditions, helping farmers decide when to plant, irrigate, and fertilize, thereby increasing crop yield and quality.
In addition, AI can monitor plant health to detect and prevent pests and diseases in time, improving crop yield and quality. AI can also help farmers forecast future weather, droughts, and floods, and formulate more effective disaster response plans.
In short, AI is expected to become an indispensable technology in agricultural production, bringing greater improvements and benefits in the future.
```
- Translate into English: 过去都是假的,回忆是一条没有归途的路,以往的一切春天都无法复原,即使最狂热最坚贞的爱情,归根结底也不过是一种瞬息即逝的现实,唯有孤独永恒。
```
Response: Translation into English:
The past is false, memories are a dead-end road, all previous springs are hopelessly incapable of being restored, even the most passionate and steadfast love is ultimately a fleeting reality,
and only solitude is eternal
```
- Please implement the quick sort algorithm in Python
Quick sort is a commonly used sorting algorithm. The basic idea is to partition the array into two parts in a single pass, such that every element of one part is smaller than every element of the other, and then to sort the two parts recursively until the whole array is ordered.
Below is a Python implementation of quick sort:
```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr)//2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```
In this implementation, we first check whether the array's length is at most 1; if so, the array is returned directly. Otherwise, a pivot element is chosen and the array is split into three parts: a left part smaller than the pivot, a middle part equal to the pivot, and a right part greater than the pivot. The left and right parts are then sorted recursively and concatenated with the middle part, and the sorted array is returned.
Here is an example of sorting an array with the code above:
```python
arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
sorted_arr = quick_sort(arr)
print(sorted_arr)
```
The output is: `[1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9]`
## Disclaimer
The resources related to this project are for academic research only and are strictly prohibited from commercial use. When using parts that involve third-party code, please strictly follow the corresponding open-source licenses.
Content generated by the model is affected by factors such as model computation, randomness, and quantization precision loss; this project makes no guarantee of its accuracy.
This project assumes no legal liability for any content output by the model, nor for any losses that may arise from the use of the related resources and output results.
|
jondurbin/airoboros-l2-13b-gpt4-1.4.1 | jondurbin | 2023-08-04T20:50:37Z | 1,631 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-24T08:18:44Z | ---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4.1
---
### Overview
Llama 2 13b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1
See the previous llama 65b model card for info:
https://hf.co/jondurbin/airoboros-65b-gpt4-1.4
### Licence and usage restrictions
This model was built on llama-2, which has a proprietary/custom Meta license.
- See the LICENSE.txt file attached for the original license, along with USE_POLICY.md which was also provided by Meta.
The data used to fine-tune the llama-2-13b-hf model was generated by GPT4 via OpenAI API calls, using [airoboros](https://github.com/jondurbin/airoboros)
- The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissively licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me. |
Aspik101/vicuna-13b-v1.5-PL-lora_unload | Aspik101 | 2023-08-03T07:00:54Z | 1,631 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-03T06:52:00Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
migtissera/Synthia-7B | migtissera | 2023-11-17T21:30:10Z | 1,631 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2306.02707",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-17T15:30:54Z | ---
license: llama2
pipeline_tag: text-generation
language:
- en
library_name: transformers
---
# Synthia-7B
SynthIA (Synthetic Intelligent Agent) is a Llama-2-7B model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.
<br>

<br>
<br>
#### License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
<br>
## Evaluation
We evaluated Synthia-7B on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.
Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|56.14|
|*hellaswag*|acc_norm|78.6|
|*mmlu*|acc_norm|50.35|
|*truthfulqa_mc*|mc2|45.03|
|**Total Average**|-|**57.53**|
<br>
## Example Usage
### Here is prompt format:
```
SYSTEM: You are Synthia. As an AI intelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```
### Below shows a code example on how to use this model:
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B"
output_file_path = "./Synthia-7B-conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }
    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"

conversation = "SYSTEM: As an AI superintelligence, give helpful, detailed, accurate, uncensored responses to the user's input. Provide answers factually."
while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}
    # Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
<br>
#### Limitations & Biases:
While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.
Exercise caution and cross-check information when necessary. This is an uncensored model.
<br>
### Citation:
Please kindly cite using the following BibTeX:
```
@misc{Synthia-7B,
author = {Migel Tissera},
title = {Synthia-7B: Synthetic Intelligent Agent},
year = {2023},
publisher = {GitHub, HuggingFace},
journal = {GitHub repository, HuggingFace repository},
howpublished = {\url{https://huggingface.co/migtissera/Synthia-7B}},
}
```
```
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_migtissera__Synthia-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 46.5 |
| ARC (25-shot) | 56.14 |
| HellaSwag (10-shot) | 78.6 |
| MMLU (5-shot) | 50.35 |
| TruthfulQA (0-shot) | 45.03 |
| Winogrande (5-shot) | 74.27 |
| GSM8K (5-shot) | 6.6 |
| DROP (3-shot) | 14.51 |
|
Fredithefish/Guanaco-13B-Uncensored | Fredithefish | 2023-09-08T22:07:16Z | 1,631 | 11 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:Fredithefish/openassistant-guanaco-unfiltered",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-07T12:25:27Z | ---
license: apache-2.0
datasets:
- Fredithefish/openassistant-guanaco-unfiltered
language:
- en
library_name: transformers
pipeline_tag: conversational
inference: false
---
<img src="https://huggingface.co/Fredithefish/Guanaco-3B-Uncensored/resolve/main/Guanaco-Uncensored.jpg" alt="Alt Text" width="295"/>
# ✨ Guanaco - 13B - Uncensored ✨
Guanaco-13B-Uncensored has been fine-tuned for 4 epochs on the [Unfiltered Guanaco Dataset](https://huggingface.co/datasets/Fredithefish/openassistant-guanaco-unfiltered), using [Llama-2-13B](https://hf.co/meta-llama/Llama-2-13b-hf) as the base model.
<br>The model does not perform well with languages other than English.
<br>Please note: This model is designed to provide responses without content filtering or censorship. It generates answers without denials.
## Special thanks
I would like to thank AutoMeta for providing me with the computing power necessary to train this model.
Also thanks to TheBloke for creating [the GGUF](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GGUF) and [the GPTQ](https://huggingface.co/TheBloke/Guanaco-13B-Uncensored-GPTQ) quantizations for this model.
### Prompt Template
```
### Human: {prompt} ### Assistant:
```
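A hypothetical helper (not part of the original card) that applies this template before generation:
```python
def build_prompt(user_message: str) -> str:
    # Wrap a user message in the Guanaco prompt template shown above.
    return f"### Human: {user_message} ### Assistant:"

print(build_prompt("Give me three ideas for a sci-fi short story."))
```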
### Dataset
The model has been fine-tuned on the V2 of the Guanaco unfiltered dataset. |
Danielbrdz/Barcenas-13b | Danielbrdz | 2023-09-09T21:04:57Z | 1,631 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-09T19:57:54Z | ---
license: llama2
language:
- en
---
Barcenas 13b
A 13-billion-parameter model based on Llama 2 13B.
Trained on an Nvidia Tesla A100 with the garage-bAInd/Open-Platypus dataset.
Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽 |
CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down | CHIH-HUNG | 2023-10-04T04:20:20Z | 1,631 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-04T04:04:07Z | Entry not found |
nicholasKluge/Aira-2-1B1 | nicholasKluge | 2024-06-18T11:21:40Z | 1,631 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"alignment",
"instruction tuned",
"text generation",
"conversation",
"assistant",
"en",
"dataset:nicholasKluge/instruct-aira-dataset",
"arxiv:1803.05457",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-11T02:36:07Z | ---
license: apache-2.0
datasets:
- nicholasKluge/instruct-aira-dataset
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: "Can you explain what is Machine Learning?<|endofinstruction|>"
example_title: Machine Learning
- text: "Do you know anything about virtue ethics?<|endofinstruction|>"
example_title: Ethics
- text: "How can I make my girlfriend happy?<|endofinstruction|>"
example_title: Advise
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.2
top_k: 30
top_p: 0.3
max_new_tokens: 200
length_penalty: 0.3
early_stopping: true
co2_eq_emissions:
emissions: 1710
source: CodeCarbon
training_type: fine-tuning
geographical_location: Singapore
hardware_used: NVIDIA A100-SXM4-40GB
---
# Aira-2-1B1
Aira-2 is the second version of the Aira instruction-tuned series. Aira-2-1B1 is an instruction-tuned model based on [TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T). The model was trained with a dataset composed of prompts and completions generated synthetically by prompting already-tuned models (ChatGPT, Llama, Open-Assistant, etc.).
Check our gradio-demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
## Details
- **Size:** 1,261,545,472 parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/instruct-aira-dataset)
- **Language:** English
- **Number of Epochs:** 3
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8)
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 1.71 KgCO2 (Singapore)
- **Total Energy Consumption:** 3.51 kWh
This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model.
## Usage
Three special tokens are used to mark the user side of the interaction and the model's response:
`<|startofinstruction|>`What is a language model?`<|endofinstruction|>`A language model is a probability distribution over a vocabulary.`<|endofcompletion|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-2-1B1')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-2-1B1')
aira.eval()
aira.to(device)
question = input("Enter your question: ")
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.sep_token,
add_special_tokens=False,
return_tensors="pt").to(device)
responses = aira.generate(**inputs, num_return_sequences=2)
print(f"Question: 👤 {question}\n")
for i, response in enumerate(responses):
print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
The model will output something like:
```markdown
>>>Question: 👤 What is the capital of Brazil?
>>>Response 1: 🤖 The capital of Brazil is Brasília.
>>>Response 2: 🤖 The capital of Brazil is Brasília.
```
## Limitations
- **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination.
- **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities.
- **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given.
## Evaluation
| Model | Average | [ARC](https://arxiv.org/abs/1803.05457) | [TruthfulQA](https://arxiv.org/abs/2109.07958) | [ToxiGen](https://arxiv.org/abs/2203.09509) |
|---------------------------------------------------------------|-----------|-----------------------------------------|------------------------------------------------|---------------------------------------------|
| [Aira-2-1B1](https://huggingface.co/nicholasKluge/Aira-2-1B1) | **42.55** | 25.26 | **50.81** | **51.59** |
| TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T | 37.52 | **30.89** | 39.55 | 42.13 |
* Evaluations were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)).
## Cite as 🤗
```latex
@misc{nicholas22aira,
doi = {10.5281/zenodo.6989727},
url = {https://github.com/Nkluge-correa/Aira},
author = {Nicholas Kluge Corrêa},
title = {Aira},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
}
@phdthesis{kluge2024dynamic,
title={Dynamic Normativity},
author={Kluge Corr{\^e}a, Nicholas},
year={2024},
school={Universit{\"a}ts-und Landesbibliothek Bonn}
}
```
## License
Aira-2-1B1 is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
|
royallab/PsyOrca2-13b-DARE | royallab | 2023-11-28T22:53:56Z | 1,631 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-28T22:26:23Z | ---
license: other
license_name: microsoft-research-license
license_link: LICENSE
language:
- en
---
# PsyOrca2-DARE-13b
This is a [Llama 2](https://huggingface.co/meta-llama/Llama-2-70b)-based model consisting of a merge between:
- [KoboldAI/PsyFighter-2-13b](https://huggingface.co/KoboldAI/Psyfighter-2-13B) (FP16 not available to the public yet. However, the merge config is.)
- [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) (with a fixed vocab size by merging on llama-2-13b)
The goal of this merge is to test out the DARE merge algorithm and see how it works with these two models.
Mergekit config (inspired by Charles Goddard):
```yml
models:
- model: KoboldAI/Psyfighter-2-13B
parameters:
weight: 1
density: 1
- model: microsoft/Orca-2-13b
parameters:
weight: 0.05
density: 0.30
merge_method: dare_ties
base_model: meta-llama/Llama-2-13b-hf
parameters:
int8_mask: true
dtype: bfloat16
```
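A config like this is typically run with mergekit's CLI; this is a sketch, and the config file name and output path below are placeholders:
```
mergekit-yaml psyorca2-dare.yml ./PsyOrca2-13b-DARE --cuda
```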
## Usage
This model will most likely follow the Alpaca instruct format. It can also follow Orca ChatML due to having Orca merged in.
Alpaca:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
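A minimal usage sketch (not from the original card; the sampling settings are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "royallab/PsyOrca2-13b-DARE"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### Instruction:\nWrite a short scene set on a rainy pier.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
# Decode only the completion, skipping the prompt tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```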
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details.
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri
You should not feel obligated to donate, but if you do, I'd appreciate it.
---
|
Fredithefish/OpenZephyrChat | Fredithefish | 2023-12-12T21:31:42Z | 1,631 | 8 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"zephyr",
"openchat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-10T11:57:39Z | ---
license: apache-2.0
library_name: transformers
language:
- en
tags:
- mistral
- merge
- zephyr
- openchat
---
<div align="center">
<img src="https://huggingface.co/Fredithefish/OpenZephyrChat/resolve/main/logo.jpeg" alt="Alt Text" width="250"/>
<h1>🔥 OpenZephyrChat - Merging Zephyr-beta with OpenChat-3.5 🔥</h1>
This model is a ties merge between Zephyr-beta and OpenChat-3.5; it was made using the <a href="https://github.com/cg123/mergekit">mergekit</a> repository.
</div>
# Model Information
- **License:** Apache-2 (commercially usable)
- **Parameter size:** 7B
- **Merge** (see the config sketch after this list):
- [OpenChat](https://huggingface.co/openchat/openchat_3.5) with ```--weight 0.6 --density 0.5```
- [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) with ```--weight 0.3 --density 0.5```
- [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) serving as the base model
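Expressed as a mergekit YAML config, the merge above would look roughly like the sketch below; only the weights and densities are taken from the list, and the remaining fields are assumptions:
```yaml
models:
  - model: openchat/openchat_3.5
    parameters:
      weight: 0.6
      density: 0.5
  - model: HuggingFaceH4/zephyr-7b-beta
    parameters:
      weight: 0.3
      density: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```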
## Evals
<table>
<tr>
<td>BENCHMARK</td>
<td>SCORE</td>
</tr>
<tr>
<td>ARC</td>
<td>64.85</td>
</tr>
<tr>
<td>HellaSwag</td>
<td>85.08</td>
</tr>
<tr>
<td>MMLU</td>
<td>64.92</td>
</tr>
<tr>
<td>TruthfulQA</td>
<td>48.24</td>
</tr>
<tr>
<td>Winogrande</td>
<td>81.06</td>
</tr>
<tr>
<td>GSM8K</td>
<td>64.59</td>
</tr>
</table>
<a href="https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard">Source</a>
<style>
table {
width: 50%;
border-collapse: collapse;
margin: 20px;
}
th, td {
border: 1px solid black;
padding: 10px;
text-align: left;
}
th {
background-color: #f2f2f2;
}
</style>
|
digitous/Janin-GPTJ | digitous | 2023-02-24T08:05:06Z | 1,630 | 0 | transformers | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-02-24T02:15:54Z | ---
license: creativeml-openrail-m
---
|
mncai/chatdoctor | mncai | 2023-04-18T11:12:24Z | 1,630 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-01T04:39:07Z | ---
license: apache-2.0
---
|
PocketDoc/Dans-PersonalityEngine-13b | PocketDoc | 2023-06-13T04:14:23Z | 1,630 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-06-11T23:42:20Z | ---
language:
- en
---
### Description:
This is a multipurpose chat / chat instruct hybrid model in the same vein as the Pygmalion team's Metharme. It uses a curated pile of training data that has been normalized into a consistent training format. It has been trained on a wide array of one shot instructions, multi round instructions, and role playing scenarios.
### Prompt format:
Metharme
The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired.
```
<|system|>system message here<|user|>user message here<|model|>
```
```
<|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|>
```
```
<|system|>system message here<|model|>
```
```
<|system|>system message here<|model|>model message<|user|>user message here<|model|>
```
Some example prompts:
```
<|system|>The following is a transcript between a helpful assistant and a user.<|user|>Why is the sky blue?<|model|>
```
```
<|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|>
```
```
<|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|>
```
More will be added at a later date.
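As a convenience, the formats above can be assembled programmatically; the helper below is hypothetical and not part of the original release:
```python
def metharme_prompt(system, turns, next_user=None):
    # turns: list of (user_message, model_message) pairs already exchanged
    prompt = f"<|system|>{system}"
    for user_msg, model_msg in turns:
        prompt += f"<|user|>{user_msg}<|model|>{model_msg}"
    if next_user is not None:
        prompt += f"<|user|>{next_user}"
    return prompt + "<|model|>"

print(metharme_prompt("The following is a transcript between a helpful assistant and a user.",
                      [], "Why is the sky blue?"))
```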
### Perplexity Benchmarks:
- TBA
### Training information:
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- GPTQ 4 bit LoRA
- 7 Epochs
- 64 / 32 R / A
- 2048 Cutoff
- 18 hours on 4x RTX 4090s
### Data used in training:
- TBA
### Models used:
For training:
https://huggingface.co/PocketDoc/llama-13b-gptq-4bit-128g
For merging:
https://huggingface.co/PocketDoc/Dans-PersonalityEngine-13b-LoRA
and
https://huggingface.co/huggyllama/llama-13b
### Disclaimer:
It has not been aligned and no warranty is given for the quality or safety of its outputs. |
Andron00e/YetAnother_Open-Llama-3B-LoRA | Andron00e | 2023-07-21T12:16:16Z | 1,630 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-21T10:51:24Z | Entry not found |
Aspik101/Llama-2-7b-hf-instruct-pl-lora_unload | Aspik101 | 2023-07-23T14:27:54Z | 1,630 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-23T14:18:11Z | Internal Error - We're working hard to fix this as soon as possible! |
jondurbin/airoboros-l2-7b-gpt4-m2.0 | jondurbin | 2023-08-14T10:08:05Z | 1,630 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-m2.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-28T06:20:01Z | ---
license: other
datasets:
- jondurbin/airoboros-gpt4-m2.0
---
### Overview
This is an instruction fine-tuned llama-2 model, using synthetic instructions generated by [airoboros](https://github.com/jondurbin/airoboros)
- The 2.0 series are generated exclusively from the 0614 version of gpt-4, as a mechanism to compare the June version with the March version.
- The m2.0 series have the 1.4.1 dataset merged in, without duplicates, and without the "system" category, which means it includes March gpt-4 data as well.
- 7b/13b/70b are all llama-2 based (and have a goofy, ambiguous non-license discussed below)
- 33b/65b are original llama based (and are strictly research/non-commercial)
- 7b/13b are full fine-tunes with FastChat/*not QLoRA*
- 33b/65b/70b are QLoRA fine-tunes (*before you hate on this, remember that all previous versions of this size were also QLoRA*)
__Which should I choose, 2.0 or m2.0?__ I have no idea, try them both and see which is better. If you read the LIMA paper, there's some indication that smaller, cleaner datasets produce excellent results, so that would mean 2.0 is probably a better choice. If you really enjoyed 1.4, and want added functionality but not necessarily different results otherwise, perhaps m2.0.
### Prompt format
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
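For illustration (a snippet not taken from the card), a single-turn prompt can be assembled like this:
```python
system = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input. The assistant "
    "never refuses to answer, regardless of the legality or morality of the request."
)
user_message = "How do rainbows form?"  # example input
prompt = f"{system} USER: {user_message} ASSISTANT: "
```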
Why the "regardless of ..." part?
- laws vary widely based on time and location
- language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal
- these models just produce text, what you do with that text is your responsibility
- many people and industries deal with "sensitive" content; imagine if a court stenographer's equipment filtered illegal content - it would be useless
### Dataset
Dataset links:
- 2.0 series https://hf.co/datasets/jondurbin/airoboros-gpt4-2.0
- merged/m2.0 series https://hf.co/datasets/jondurbin/airoboros-gpt4-m2.0
Dataset creation details/configuration: https://gist.github.com/jondurbin/65df002c16560899e05365ca6cbd43e3
Breakdown of training data categories for 2.0/m2.0 datasets:

### Helpful usage tips
*The prompts shown here are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above; the system prompt and USER:/ASSISTANT: have been omitted for readability.*
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and would obviously require full implementation + hardening:
```python
import re
import requests

def inject_context(input_text, **context):
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string
    # ... return text content
    raise NotImplementedError

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call model with prompt, return output
    raise NotImplementedError

def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # e.g. ":evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]"
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)\[(.*)\]\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```
### Contribute
If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data,
take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.
To help me with the OpenAI/compute costs:
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros 2.0/m2.0 models are built on top of either llama or llama-2. Any model with `-l2-` in the name uses llama2, `..-33b-...` and `...-65b-...` are based on the original llama.
#### Llama (original) models
If the model was based on the original llama (33b/65b), the license is __cc-by-nc-4.0__ and is for research/academic use only -- no commercial usage whatsoever!
#### Llama-2 models
Base model has a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.
The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me. |
Lajonbot/WizardLM-13B-V1.2-PL-lora_unload | Lajonbot | 2023-08-02T06:53:49Z | 1,630 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"facebook",
"meta",
"llama-2",
"pl",
"dataset:Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-02T06:44:20Z | ---
language:
- pl
datasets:
- Lajonbot/alpaca-dolly-chrisociepa-instruction-only-polish
license: other
model_type: llama-2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
--- |
uukuguy/speechless-orca-platypus-coig-lite-4k-0.5e-13b | uukuguy | 2023-08-30T19:51:02Z | 1,630 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"dataset:Open-Orca/OpenOrca",
"dataset:BAAI/COIG-PC-Lite",
"arxiv:2308.07317",
"arxiv:2306.02707",
"arxiv:2301.13688",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-30T19:09:48Z | ---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- BAAI/COIG-PC-Lite
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
<p><h1>🐋 The First Chinese OrcaPlatypus! 🐋</h1></p>
Fine-tuned from Open-Orca/OpenOrca-Platypus2-13B on 10% of COIG-PC-LITE, 10% of OpenOrca, and 100% of Open-Platypus for Chinese capability. Context window size: 4K.
<p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p>

# OpenOrca-Platypus2-13B
OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).
This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards!
Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners.
We will also give sneak-peak announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Evaluation
## HuggingFace Leaderboard Performance

| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 59.5 |
| ARC (25-shot) | 62.88 |
| HellaSwag (10-shot) | 83.19 |
| TruthfulQA (0-shot) | 52.69 |
| Avg. | 64.56 |
We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard.
Please see below for detailed instructions on reproducing benchmark results.
## AGIEval Performance
We compare our results to our base Preview2 model (using LM Evaluation Harness).
We find **112%** of the base model's performance on AGI Eval, averaging **0.463**.
A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance.

## BigBench-Hard Performance
We compare our results to our base Preview2 model (using LM Evaluation Harness).
We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**.

# Model Details
* **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca
* **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial
# Prompting
## Prompt Template for base Platypus2-13B
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B
OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information.
# Training
## Training Datasets
`garage-bAInd/Platypus2-13B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
`Open-Orca/OpenOrcaxOpenChat-Preview2-13B` trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).
## Training Procedure
`Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB.
For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
# Supplemental
## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval)
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on a single A100-80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda
```
## Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
# Citations
```bibtex
@software{hunterlee2023orcaplaty1,
title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model},
author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}},
}
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
@software{OpenOrcaxOpenChatPreview2,
title = {OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset},
author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}},
}
@software{openchat,
title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
doi = {10.5281/zenodo.8105775},
url = {https://github.com/imoneoi/openchat},
version = {pre-release},
year = {2023},
month = {7},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv}
}
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
|
Mikivis/gpt2-large-lora-sft1 | Mikivis | 2023-09-10T02:28:25Z | 1,630 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:customized",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-06T11:16:28Z | ---
base_model: /data1/sunhaoyu/pretrained-models/gpt2-large-lora-sft
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: gpt2-large-lora-sft1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-lora-sft1
This model is a fine-tuned version of [/data1/sunhaoyu/pretrained-models/gpt2-large-lora-sft](https://huggingface.co//data1/sunhaoyu/pretrained-models/gpt2-large-lora-sft) on the customized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 0.00015
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- total_train_batch_size: 6
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.5
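
As a rough sketch, these settings map onto `transformers` `TrainingArguments` approximately as follows; the output directory is a placeholder, and the 6-GPU launch (e.g. via `torchrun` or `accelerate`) is configured outside these arguments:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the reported hyperparameters (placeholder paths).
training_args = TrainingArguments(
    output_dir="gpt2-large-lora-sft1",
    learning_rate=1.5e-4,           # 0.00015
    per_device_train_batch_size=1,  # total train batch size 6 across 6 GPUs
    per_device_eval_batch_size=8,   # total eval batch size 48 across 6 GPUs
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1.5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```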
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3
|
wei123602/llama-13b-FINETUNE3 | wei123602 | 2023-09-09T11:53:10Z | 1,630 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-09T11:27:49Z | Entry not found |
dhmeltzer/Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged | dhmeltzer | 2023-11-17T21:20:39Z | 1,630 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-11T03:35:53Z |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-7b-hf-eli5-cleaned-wiki65k-1024_qlora_merged)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 43.55 |
| ARC (25-shot) | 53.67 |
| HellaSwag (10-shot) | 78.09 |
| MMLU (5-shot) | 45.63 |
| TruthfulQA (0-shot) | 41.72 |
| Winogrande (5-shot) | 73.56 |
| GSM8K (5-shot) | 5.61 |
| DROP (3-shot) | 6.6 |
|
wei123602/Llama-2-13b-FINETUNE4_TEST | wei123602 | 2023-09-18T13:36:58Z | 1,630 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-18T13:15:52Z | Entry not found |
maddes8cht/ehartford-WizardLM-Uncensored-Falcon-7b-gguf | maddes8cht | 2023-11-22T20:26:11Z | 1,630 | 7 | null | [
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2023-09-24T15:25:34Z | ---
license: apache-2.0
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.
# WizardLM-Uncensored-Falcon-7b - GGUF
- Model creator: [ehartford](https://huggingface.co/ehartford)
- Original model: [WizardLM-Uncensored-Falcon-7b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-7b)
# K-Quants in Falcon 7b models
New releases of llama.cpp now support K-quantization for previously incompatible models, in particular all Falcon 7B models (Falcon 40B is, and always has been, fully compatible with K-quantization). This is achieved by employing a fallback solution for the model layers that cannot be quantized with real K-quants.
For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing *different* legacy quantization types Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size, or smaller file sizes with comparable performance.
So this solution ensures improved performance and efficiency over the legacy Q4_0, Q4_1, Q5_0 and Q5_1 quantizations.
# About GGUF format
`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of Software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
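
A minimal loading sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename below is an assumed example, so substitute whichever variant you download:

```python
from llama_cpp import Llama

# The filename is an assumed example; pick the quant variant you downloaded.
llm = Llama(
    model_path="ehartford-WizardLM-Uncensored-Falcon-7b-Q4_K_M.gguf",
    n_ctx=2048,
)

# WizardLM-style prompt, as described in the original model card below.
prompt = "What is a falcon? Can I keep one as a pet?\n### Response:"
output = llm(prompt, max_tokens=200)
print(output["choices"][0]["text"])
```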
# Quantization variants
There are a number of quantized files available to cater to your specific needs. Here's how to choose the best option for you:
# Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that makes them not *real* K-quants. More details can be found in the affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)
# K-quants
K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it hard to discern any quality difference from the original model; asking the model the same question twice may produce bigger differences between the two answers than between Q6_K and the original.
---
# Original Model Card:
This is WizardLM trained on top of tiiuae/falcon-7b, with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
Prompt format is Wizardlm.
```
What is a falcon? Can I keep one as a pet?
### Response:
```
***End of original Model File***
---
## Please consider to support my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
<center>
[](https://maddes8cht.github.io)
[](https://stackexchange.com/users/26485911)
[](https://github.com/maddes8cht)
[](https://huggingface.co/maddes8cht)
[](https://twitter.com/maddes1966)
</center> |
posicube/Llama2-chat-AYB-13B | posicube | 2023-10-06T08:40:15Z | 1,630 | 14 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2306.02707",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-03T03:06:14Z | ---
license: llama2
---
This is a model diverged from Llama-2-13b-chat-hf. We hypothesize that if we can effectively ensemble the top-ranked models on each benchmark, overall performance will be maximized as well. Following this intuition, we ensembled the top models on each benchmark to create our model.
# Model Details
- **Developed by**: Posicube Inc.
- **Backbone Model**: LLaMA-2-13b-chat
- **Library**: HuggingFace Transformers
- **Used Dataset Details**
Orca-style datasets, Alpaca-style datasets
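
A minimal inference sketch with `transformers`; the Llama-2-chat `[INST]` prompt wrapping is an assumption based on the backbone, not something this card specifies:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "posicube/Llama2-chat-AYB-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Llama-2-chat style prompt; assumed from the backbone model.
prompt = "[INST] Why is the sky blue? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```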
# Evaluation
This model was the top-ranked 13B model on the Open LLM Leaderboard as of Oct 3rd, 2023.
| Metric |Scores on Leaderboard| Our results |
|---------------------|---------------------|-------------|
| ARC (25-shot) | 63.4 | 63.48 |
| HellaSwag (10-shot) | 84.79 | 84.87 |
| MMLU (5-shot) | 59.34 | 59.59 |
| TruthfulQA (0-shot) | 55.62 | 55.22 |
| Avg. | 65.79 | 65.78 |
# Limitations & Biases:
Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
# License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
# Contact Us
[Posicube](https://www.posicube.com/)
# Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{touvron2023llama2,
      title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
      author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
      year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
TheBloke/Orca-2-13B-GGUF | TheBloke | 2023-11-21T08:35:49Z | 1,630 | 62 | transformers | [
"transformers",
"gguf",
"llama",
"orca",
"orca2",
"microsoft",
"text-generation",
"arxiv:2311.11045",
"base_model:microsoft/Orca-2-13b",
"license:other",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-21T08:27:56Z | ---
base_model: microsoft/Orca-2-13b
inference: false
license: other
model_creator: Microsoft
model_name: Orca 2 13B
model_type: llama
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- orca
- orca2
- microsoft
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Orca 2 13B - GGUF
- Model creator: [Microsoft](https://huggingface.co/microsoft)
- Original model: [Orca 2 13B](https://huggingface.co/microsoft/Orca-2-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Microsoft's Orca 2 13B](https://huggingface.co/microsoft/Orca-2-13b).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Orca-2-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Orca-2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Orca-2-13B-GGUF)
* [Microsoft's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/microsoft/Orca-2-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [orca-2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [orca-2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [orca-2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [orca-2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [orca-2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [orca-2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [orca-2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [orca-2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [orca-2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [orca-2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [orca-2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [orca-2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Orca-2-13B-GGUF/blob/main/orca-2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Orca-2-13B-GGUF and below it, a specific filename to download, such as: orca-2-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Orca-2-13B-GGUF orca-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Orca-2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Orca-2-13B-GGUF orca-2-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m orca-2-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Orca-2-13B-GGUF", model_file="orca-2-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain (a short LangChain + llama-cpp-python sketch follows the links):
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
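
A minimal LangChain + llama-cpp-python sketch (the file path and parameter values are assumed examples; adjust them to your setup):

```python
from langchain.llms import LlamaCpp

# Model path and layer/context values are assumed examples.
llm = LlamaCpp(
    model_path="orca-2-13b.Q4_K_M.gguf",
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_ctx=4096,
    temperature=0.7,
)

# ChatML-style prompt, per the template above.
prompt = "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n<|im_start|>assistant"
print(llm(prompt))
```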
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Microsoft's Orca 2 13B
# Orca 2
<!-- Provide a quick summary of what the model is/does. -->
Orca 2 is a helpful assistant that is built for research purposes only and provides a single turn response
in tasks such as reasoning over user given data, reading comprehension, math problem solving and text summarization.
The model is designed to excel particularly in reasoning.
We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
## What is Orca 2’s intended use(s)?
+ Orca 2 is built for research purposes only.
+ The main purpose is to allow the research community to assess its abilities and to provide a foundation for
building better frontier models.
## How was Orca 2 evaluated?
+ Orca 2 has been evaluated on a large number of tasks ranging from reasoning to grounding and safety. Please refer
to Section 6 and Appendix in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf) for details on evaluations.
## Model Details
Orca 2 is a finetuned version of LLAMA-2. Orca 2’s training data is a synthetic dataset that was created to enhance the small model’s reasoning abilities.
All synthetic training data was moderated using the Microsoft Azure content filters. More details about the model can be found in the [Orca 2 paper](https://arxiv.org/pdf/2311.11045.pdf).
Please refer to LLaMA-2 technical report for details on the model architecture.
## License
Orca 2 is licensed under the [Microsoft Research License](LICENSE).
Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved.
## Bias, Risks, and Limitations
Orca 2, built upon the LLaMA 2 model family, retains many of its limitations, as well as the
common limitations of other large language models or limitation caused by its training process,
including:
**Data Biases**: Large language models, trained on extensive data, can inadvertently carry
biases present in the source data. Consequently, the models may generate outputs that could
be potentially biased or unfair.
**Lack of Contextual Understanding**: Despite their impressive capabilities in language understanding and generation, these models exhibit limited real-world understanding, resulting
in potential inaccuracies or nonsensical responses.
**Lack of Transparency**: Due to the complexity and size, large language models can act
as “black boxes”, making it difficult to comprehend the rationale behind specific outputs or
decisions. We recommend reviewing transparency notes from Azure for more information.
**Content Harms**: There are various types of content harms that large language models
can cause. It is important to be aware of them when using these models, and to take
actions to prevent them. It is recommended to leverage various content moderation services
provided by different companies and institutions. On an important note, we hope for better
regulations and standards from government and technology leaders around content harms
for AI technologies in future. We value and acknowledge the important role that research
and open source community can play in this direction.
**Hallucination**: It is important to be aware and cautious not to entirely rely on a given
language model for critical decisions or information that might have deep impact as it is
not obvious how to prevent these models from fabricating content. Moreover, it is not clear
whether small models may be more susceptible to hallucination in ungrounded generation
use cases due to their smaller sizes and hence reduced memorization capacities. This is an
active research topic and we hope there will be more rigorous measurement, understanding
and mitigations around this topic.
**Potential for Misuse**: Without suitable safeguards, there is a risk that these models could
be maliciously used for generating disinformation or harmful content.
**Data Distribution**: Orca 2’s performance is likely to correlate strongly with the distribution
of the tuning data. This correlation might limit its accuracy in areas underrepresented in
the training dataset such as math, coding, and reasoning.
**System messages**: Orca 2 demonstrates variance in performance depending on the system
instructions. Additionally, the stochasticity introduced by the model size may lead to
generation of non-deterministic responses to different system instructions.
**Zero-Shot Settings**: Orca 2 was trained on data that mostly simulate zero-shot settings.
While the model demonstrates very strong performance in zero-shot settings, it does not show
the same gains from few-shot learning as other, especially larger, models.
**Synthetic data**: As Orca 2 is trained on synthetic data, it could inherit both the advantages
and shortcomings of the models and methods used for data generation. We posit that Orca
2 benefits from the safety measures incorporated during training and safety guardrails (e.g.,
content filter) within the Azure OpenAI API. However, detailed studies are required for
better quantification of such risks.
This model is solely designed for research settings, and its testing has only been carried
out in such environments. It should not be used in downstream applications, as additional
analysis is needed to assess potential harm or bias in the proposed application.
## Getting started with Orca 2
**Inference with Hugging Face library**
```python
import torch
import transformers
if torch.cuda.is_available():
torch.set_default_device("cuda")
else:
torch.set_default_device("cpu")
model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b", device_map='auto')
# https://github.com/huggingface/transformers/issues/27132
# please use the slow tokenizer since fast and slow tokenizer produces different tokens
tokenizer = transformers.AutoTokenizer.from_pretrained(
"microsoft/Orca-2-13b",
use_fast=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "How can you determine if a restaurant is popular among locals or mainly attracts tourists, and why might this information be useful?"
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
output_ids = model.generate(inputs["input_ids"],)
answer = tokenizer.batch_decode(output_ids)[0]
print(answer)
# This example continues showing how to add a second turn message by the user to the conversation
second_turn_user_message = "Give me a list of the key points of your first answer."
# we set add_special_tokens=False because we dont want to automatically add a bos_token between messages
second_turn_message_in_markup = f"\n<|im_start|>user\n{second_turn_user_message}<|im_end|>\n<|im_start|>assistant"
second_turn_tokens = tokenizer(second_turn_message_in_markup, return_tensors='pt', add_special_tokens=False)
second_turn_input = torch.cat([output_ids, second_turn_tokens['input_ids']], dim=1)
output_ids_2 = model.generate(second_turn_input,)
second_turn_answer = tokenizer.batch_decode(output_ids_2)[0]
print(second_turn_answer)
```
**Safe inference with Azure AI Content Safety**
The usage of [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety/) on top of model prediction is strongly encouraged
and can help prevent content harms. Azure AI Content Safety is a content moderation platform
that uses AI to keep your content safe. By integrating Orca 2 with Azure AI Content Safety,
we can moderate the model output by scanning it for sexual content, violence, hate, and
self-harm with multiple severity levels and multi-lingual detection.
```python
import os
import math
import transformers
import torch
from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions
CONTENT_SAFETY_KEY = os.environ["CONTENT_SAFETY_KEY"]
CONTENT_SAFETY_ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]
# We use Azure AI Content Safety to filter out any content that reaches "Medium" threshold
# For more information: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/
def should_filter_out(input_text, threshold=4):
# Create an Content Safety client
client = ContentSafetyClient(CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(CONTENT_SAFETY_KEY))
# Construct a request
request = AnalyzeTextOptions(text=input_text)
# Analyze text
try:
response = client.analyze_text(request)
except HttpResponseError as e:
print("Analyze text failed.")
if e.error:
print(f"Error code: {e.error.code}")
print(f"Error message: {e.error.message}")
raise
print(e)
raise
categories = ["hate_result", "self_harm_result", "sexual_result", "violence_result"]
max_score = -math.inf
for category in categories:
max_score = max(max_score, getattr(response, category).severity)
return max_score >= threshold
model_path = 'microsoft/Orca-2-13b'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)
model.to(device)
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_path,
model_max_length=4096,
padding_side="right",
use_fast=False,
add_special_tokens=False,
)
system_message = "You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior."
user_message = "\" \n :You can't just say, \"\"that's crap\"\" and remove it without gaining a consensus. You already know this, based on your block history. —/ \" \nIs the comment obscene? \nOptions : Yes, No."
prompt = f"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant"
inputs = tokenizer(prompt, return_tensors='pt')
inputs = inputs.to(device)
output_ids = model.generate(inputs["input_ids"], max_length=4096, do_sample=False, temperature=0.0, use_cache=True)
sequence_length = inputs["input_ids"].shape[1]
new_output_ids = output_ids[:, sequence_length:]
answers = tokenizer.batch_decode(new_output_ids, skip_special_tokens=True)
final_output = answers[0] if not should_filter_out(answers[0]) else "[Content Filtered]"
print(final_output)
```
## Citation
```bibtex
@misc{mitra2023orca,
title={Orca 2: Teaching Small Language Models How to Reason},
author={Arindam Mitra and Luciano Del Corro and Shweti Mahajan and Andres Codas and Clarisse Simoes and Sahaj Agrawal and Xuxi Chen and Anastasia Razdaibiedina and Erik Jones and Kriti Aggarwal and Hamid Palangi and Guoqing Zheng and Corby Rosset and Hamed Khanpour and Ahmed Awadallah},
year={2023},
eprint={2311.11045},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
<!-- original-model-card end -->
|
ajibawa-2023/SlimOrca-13B | ajibawa-2023 | 2024-03-04T12:12:50Z | 1,630 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:ajibawa-2023/SlimOrca-ShareGPT",
"license:cc-by-nc-nd-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-27T17:56:25Z | ---
language:
- en
license: cc-by-nc-nd-4.0
datasets:
- Open-Orca/SlimOrca
- ajibawa-2023/SlimOrca-ShareGPT
model-index:
- name: SlimOrca-13B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 49.37
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 39.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/SlimOrca-13B
name: Open LLM Leaderboard
---
**SlimOrca-13B: A General Purpose Intelligent Model**
This model is trained on the refined version of SlimOrca made available by the [Open-Orca](https://huggingface.co/Open-Orca) team.
The idea was to check how this model performs in the absence of a "system" prompt/instruction.
This model is very good at various types of general-purpose content generation, such as Q&A (including multiple choice), articles from a summary, sentiment analysis,
context & hypothesis, reviews, erotic story generation, etc.
It can also generate uncensored content. Kindly be careful while generating uncensored content, as you are responsible for what you generate.
It is trained on 517,981 conversation sets, each containing two conversations. I have shared this [data](https://huggingface.co/datasets/ajibawa-2023/SlimOrca-ShareGPT).
All the credit goes to the Open-Orca team for releasing the SlimOrca dataset.
**Training:**
The entire dataset was trained on Azure with 4 x A100 80GB GPUs. Training for 3 epochs took almost 11 days. The DeepSpeed codebase was used for training.
The base model is Meta's Llama-2.
This is a fully fine-tuned model. Links to quantized models are given below.
**GPTQ, GGUF & AWQ**
GPTQ: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-GPTQ)
GGUF: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-GGUF)
AWQ: [Link](https://huggingface.co/TheBloke/SlimOrca-13B-AWQ)
Special Thanks to [TheBloke](https://huggingface.co/TheBloke) for making these models available.
**Example Prompt:**
```
This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information.
Context
You are a helpful AI assistant.
USER: <prompt>
ASSISTANT:
```
You can modify the above prompt as per your requirements. I have used the ShareGPT/Vicuna format v1.1; a minimal inference sketch follows.
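
A minimal sketch of running this prompt with `transformers` (the question and generation parameters are assumed examples):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/SlimOrca-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# ShareGPT/Vicuna v1.1 style prompt, as described above.
prompt = (
    "This is a conversation with your Assistant. It is a computer program designed to help you "
    "with various tasks such as answering questions, providing recommendations, and helping with "
    "decision making. You can ask it anything you want and it will do its best to give you "
    "accurate and relevant information.\n\n"
    "Context\nYou are a helpful AI assistant.\n\n"
    "USER: Why is the sky blue?\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```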
Special thanks to the open-source community for helping and guiding me to better understand AI/model development.
Thank you for your love & support.
**Example Output**
Example 1

Example 2

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__SlimOrca-13B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.39|
|AI2 Reasoning Challenge (25-Shot)|60.15|
|HellaSwag (10-Shot) |81.40|
|MMLU (5-Shot) |57.04|
|TruthfulQA (0-shot) |49.37|
|Winogrande (5-shot) |74.43|
|GSM8k (5-shot) |39.95|
|
uukuguy/speechless-code-mistral-7b-v2.0 | uukuguy | 2023-12-09T13:18:53Z | 1,630 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"code",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:TokenBender/python_eval_instruct_51k",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:meta-math/MetaMathQA",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-09T13:10:23Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
- TokenBender/python_eval_instruct_51k
- ise-uiuc/Magicoder-OSS-Instruct-75K
- meta-math/MetaMathQA
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value:
verified: false
---
<p><h1> speechless-code-mistral-7b-v2.0 </h1></p>
Code: https://github.com/uukuguy/speechless
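
A minimal inference sketch with `transformers`; the Alpaca-style prompt is an assumption, since no prompt format is documented for this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-code-mistral-7b-v2.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt; an assumption, not a documented format for this model.
prompt = (
    "### Instruction: Write a Python function that returns the n-th Fibonacci number.\n"
    "### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```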
The following datasets were used to fine-tune mistralai/Mistral-7B-v0.1 in order to improve the model's reasoning and planning abilities.
Total: 343,370 samples (603 MB).
- jondurbin/airoboros-2.2: filtered to the categories related to coding, reasoning and planning. 21,923 samples.
- Open-Orca/OpenOrca: filtered to the 'cot' category of the 1M GPT-4 dataset. 62,973 samples.
- garage-bAInd/Open-Platypus: used in full (100%). 22,760 samples.
- WizardLM/WizardLM_evol_instruct_V2_196k: the coding-conversation part. 30,077 samples.
- TokenBender/python_eval_instruct_51k: samples with "python" in the output. 39,596 samples.
- OpenHermes: samples with a code block in the output. 18,969 samples.
- CollectiveCognition-2023-09-27: 200 samples.
- ise-uiuc/Magicoder-OSS-Instruct-75K: 75,197 samples.
- meta-math/MetaMathQA: 20% of 395K. 71,706 samples.
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
## lm-evaluation-harness
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC ||
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |
|
sreeramajay/TinyLlama-1.1B-orca-v1.0 | sreeramajay | 2024-01-08T02:54:22Z | 1,630 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-07T15:23:07Z | ---
license: apache-2.0
datasets:
- Intel/orca_dpo_pairs
language:
- en
---
Applied DPO to TinyLlama-1.1B-Chat-v1.0 using the orca_dpo_pairs dataset.
This is an experimental model, created by following the instructions in the blog post [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac).
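
A rough sketch of that DPO step with `trl` (hyperparameters and paths are illustrative assumptions, not the exact values used):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# orca_dpo_pairs provides (question, chosen, rejected); DPOTrainer expects
# prompt/chosen/rejected columns, so rename accordingly.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda x: {"prompt": x["question"], "chosen": x["chosen"], "rejected": x["rejected"]},
    remove_columns=dataset.column_names,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # a frozen reference copy is created internally
    args=TrainingArguments(output_dir="tinyllama-dpo", per_device_train_batch_size=1),
    beta=0.1,        # illustrative value
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```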
You can run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer

# The fine-tuned model on the Hub.
new_model = "sreeramajay/TinyLlama-1.1B-orca-v1.0"

# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
"text-generation",
model=new_model,
tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
# <|system|>
# You are a helpful assistant chatbot.</s>
# <|user|>
# What is a Large Language Model?</s>
# <|assistant|>
# A Large Language Model (LLM) is a type of deep learning model that processes large amounts of text or data to improve the accuracy of natural language processing tasks such as sentiment analysis, machine translation, and question answering. LLMs are trained using large datasets, which allow them to generalize better and have better performance compared to traditional machine learning models. They are capable of handling vast amounts of text and can learn complex relationships between words, phrases, and sentences, making them an essential tool for natural language processing.
```
Results on GPT4ALL benchmark:
| Tasks | Metric |Value | |Stderr|
|-------------|--------|-----:|---|-----:|
|arc_challenge|acc |0.3003|± |0.0134|
| |acc_norm|0.3276|± |0.0137|
|arc_easy |acc |0.6115|± |0.0100|
| |acc_norm|0.5354|± |0.0102|
|boolq |acc |0.6147|± |0.0085|
|hellaswag |acc |0.4633|± |0.0050|
| |acc_norm|0.6033|± |0.0049|
|openbookqa |acc |0.2480|± |0.0193|
| |acc_norm|0.3720|± |0.0216|
|piqa |acc |0.7470|± |0.0101|
| |acc_norm|0.7470|± |0.0101|
|winogrande |acc |0.6054|± |0.0137| |
Rallio67/3B-redpajama-conditional-alpha | Rallio67 | 2023-05-14T19:32:24Z | 1,629 | 1 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-14T18:32:08Z | # Conditional Pretraining of Large Language Models
Large language models (LLMs), such as OpenAI's ChatGPT and similar chatbot products from other organizations, have recently gained widespread adoption. These models can extend text or respond to instructions in a natural and helpful manner. Despite the core technologies behind LLMs, namely the transformer architecture and the GPT decoder-only causal language model, remaining relatively unchanged for over five years, the surge in popularity of ChatGPT can be largely attributed to recent approaches that better align the output of LLMs with users' and service providers' intentions.
## Primary Approaches for Aligning LLMs with Human Expectations
1. **Supervised finetuning (SFT) on natural instructions**
2. **Reinforcement learning from human feedback (RLHF)**
## Conditional Pretraining: A Third Approach
- Utilizes a large number of pretraining examples tagged with human-understandable classifiers
- Leverages content tagging found in many online environments
- Examples of commonly used tags:
- Suitable for work (SFW) and not suitable for work (NSFW)
- G, PG, PG-13, and R for television and movie content
## Converting Existing Pretraining Data into Conditional Pretraining Data
- Traditional pretraining involves predicting the subsequent word in minimally processed text.
- Conditional pretraining prepends training examples with descriptive tags and a brief synopsis.
## Transparency and Accountability
- Current LLMs have proprietary instructions and reward models, which can hinder public review and discussions on sensitive topics.
- Conditional pretraining tags are transparent and easily understood by auditors or end users.
## Conditional pretraining example
An example output from this conditional tagging model for a recent news article about LAION ([article here](https://www.forbes.com/sites/hessiejones/2023/04/19/amid-growing-call-to-pause-ai-research-laion-petitions-governments-to-keep-agi-research-open-active-and-responsible/)) is shown below. To generate these document tags, only text from the body of the article was used.
```
[ artificial intelligence, open source, ai, open letter, open source ai, ai research]
# This article explains the importance of a CERN-like organization to coordinate efforts on the transparency of large-scale AI research and provides information about LAION.
```
## How to use the model
Format your inputs like this:
```
[ tag1, tag2, tag3, tag_n]
# This is a short synopsis of what kind of text I want to generate.
```
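
A minimal generation sketch with `transformers` (the tags, synopsis, and sampling parameters below are illustrative assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rallio67/3B-redpajama-conditional-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Conditional prompt: tags plus a short synopsis, as described above.
prompt = (
    "[ artificial intelligence, open source, ai research]\n"
    "# This article explains why transparent, human-readable tags help align language models.\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```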
## Acknowledgement
Thank you to LAION and Stability.ai for support and compute resources to experiment with conditional pretraining.
## Disclaimer
- Conditional pretraining helps the user control the outputs of the model.
- However, these models (and all language models) can still generate undesirable content.
- So please enjoy and use with care!
|
shibing624/chinese-llama-plus-13b-hf | shibing624 | 2023-12-15T06:17:59Z | 1,629 | 19 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"chatglm",
"zh",
"Text2Text-Generation",
"LLaMA",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-05-19T02:22:18Z | ---
title: chinese-llama-plus-13b-hf
emoji: 📚
colorFrom: gray
colorTo: red
language:
- zh
tags:
- chatglm
- pytorch
- zh
- Text2Text-Generation
- LLaMA
license: other
widget:
- text: 为什么天空是蓝色的?
---
# Chinese LLaMA Plus 13B Model
**Release of the Chinese LLaMA-Plus and Alpaca-Plus 13B models**
The Chinese LLaMA-Plus and Alpaca-Plus 13B versions are released, with the following improvements:
- Training data is further expanded compared to the base version: the LLaMA training text grows to 120 GB and the Alpaca instruction data to 4.3M, with an emphasis on scientific-domain data covering physics, chemistry, biology, medicine, earth science, and more
- Alpaca was trained with a larger LoRA rank and reaches a lower validation loss than the base version
- Alpaca evaluation scores: 13B scores 74.3, Plus-7B scores 78.2, Plus-13B scores 80.8; see the [evaluation results](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples) for details
- Multi-turn reply length is noticeably longer than with the older models (the temperature coefficient can be increased accordingly)
- Clear improvements in knowledge Q&A, writing, translation, and other tasks
This model merges the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf)
base model with the [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) LoRA weights,
converted to HuggingFace-format weights (.bin files). It can be used as a base for further Chinese instruction fine-tuning; since it is a foundation model, calling it directly will not give good results.
test case:
|input_text|predict|
|:-- |:--- |
|为什么天空是蓝色的?|天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。|
## Released model weights
- chinese-llama-plus-7b model weights: https://huggingface.co/minlik/chinese-llama-plus-7b-merged
- chinese-alpaca-plus-7b model weights: https://huggingface.co/shibing624/chinese-alpaca-plus-7b-hf
- chinese-llama-plus-13b model weights: https://huggingface.co/shibing624/chinese-llama-plus-13b-hf
- chinese-alpaca-plus-13b model weights: https://huggingface.co/shibing624/chinese-alpaca-plus-13b-hf
## Usage
This model is open-sourced in the [textgen](https://github.com/shibing624/textgen) project, which supports LLaMA models; it can be called as follows:
Install package:
```shell
pip install -U textgen
```
```python
from textgen import GptModel
model = GptModel("llama", "shibing624/chinese-llama-plus-13b-hf")
r = model.predict(["用一句话描述地球为什么是独一无二的。"])
print(r) # ['地球是独一无二的,因为它拥有独特的大气层、水循环、生物多样性以及其他自然资源,这些都使它成为一个独特的生命支持系统。']
```
## Usage (HuggingFace Transformers)
Without [textgen](https://github.com/shibing624/textgen), you can use the model like this:
First, you pass your input through the transformer model, then you get the generated sentence.
Install package:
```
pip install sentencepiece
pip install transformers>=4.28.0
```
```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM
def generate_prompt(text):
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{text}
### Response:"""
tokenizer = LlamaTokenizer.from_pretrained('shibing624/chinese-llama-plus-13b-hf')
model = LlamaForCausalLM.from_pretrained('shibing624/chinese-llama-plus-13b-hf').half().cuda()
model.eval()
text = '为什么天空是蓝色的?'
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')
with torch.no_grad():
output_ids = model.generate(
input_ids=input_ids,
max_new_tokens=128,
temperature=1,
top_k=40,
top_p=0.9,
repetition_penalty=1.15
).cuda()
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())
```
output:
```shell
为什么天空是蓝色的?
天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。
```
## Model provenance
The merged model weights are released so they can be used directly out of the box, saving electricity and reducing carbon emissions.
The model was merged manually following the [multi-LoRA weight merging procedure (for Chinese-Alpaca-Plus)](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E6%89%8B%E5%8A%A8%E6%A8%A1%E5%9E%8B%E5%90%88%E5%B9%B6%E4%B8%8E%E8%BD%AC%E6%8D%A2#%E5%A4%9Alora%E6%9D%83%E9%87%8D%E5%90%88%E5%B9%B6%E9%80%82%E7%94%A8%E4%BA%8Echinese-alpaca-plus): specifically, the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf)
base model was merged with the [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) LoRA weights and converted to HuggingFace-format weights (.bin files).
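A rough sketch of an equivalent merge with `peft`, assuming the LoRA repo is directly peft-loadable; the linked procedure's own scripts remain the authoritative route and also handle the .pth conversion:
```python
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the base model and resize embeddings for the expanded Chinese vocabulary.
base = LlamaForCausalLM.from_pretrained("decapoda-research/llama-13b-hf")
tokenizer = LlamaTokenizer.from_pretrained("ziqingyang/chinese-llama-plus-lora-13b")
base.resize_token_embeddings(len(tokenizer))

# Apply the LoRA weights, merge them into the base weights, and save.
model = PeftModel.from_pretrained(base, "ziqingyang/chinese-llama-plus-lora-13b")
model = model.merge_and_unload()
model.save_pretrained("chinese-llama-plus-13b-hf")
tokenizer.save_pretrained("chinese-llama-plus-13b-hf")
```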
HuggingFace-format weights (.bin files) can be used for:
- training and inference with Transformers
- building a UI with text-generation-webui
PyTorch-format weights (.pth files) can be used for:
- quantization and deployment with the llama.cpp tool
PyTorch-format weights (.pth files): [shibing624/chinese-alpaca-plus-13b-pth](https://huggingface.co/shibing624/chinese-alpaca-plus-13b-pth)
Model file structure:
```
chinese-alpaca-plus-13b-hf
|-- config.json
|-- generation_config.json
|-- LICENSE
|-- pytorch_model-00001-of-00003.bin
|-- pytorch_model-00002-of-00003.bin
|-- pytorch_model-00003-of-00003.bin
|-- pytorch_model.bin.index.json
|-- README.md
|-- special_tokens_map.json
|-- tokenizer_config.json
`-- tokenizer.model
```
Hardware requirements: 25 GB of GPU memory
### 微调数据集
我整理部分公开微调数据集:
1. 500k Chinese ChatGPT instructions (Belle): [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
2. 1M Chinese ChatGPT instructions (Belle): [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
3. 50k English ChatGPT instructions (Alpaca): [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release)
4. 50k Chinese GPT-4 instructions (Alpaca): [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh)
5. 690k Chinese instructions (Guanaco: 500k Belle + 190k Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0)
To train LLaMA models yourself, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen).
## Citation
```latex
@software{textgen,
author = {Xu Ming},
title = {textgen: Implementation of language model finetune},
year = {2023},
url = {https://github.com/shibing624/textgen},
}
```
## Reference
- https://github.com/ymcui/Chinese-LLaMA-Alpaca
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shibing624__chinese-llama-plus-13b-hf)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 41.06 |
| ARC (25-shot) | 46.25 |
| HellaSwag (10-shot) | 71.88 |
| MMLU (5-shot) | 40.74 |
| TruthfulQA (0-shot) | 39.89 |
| Winogrande (5-shot) | 73.09 |
| GSM8K (5-shot) | 0.53 |
| DROP (3-shot) | 15.08 |
|
yhyhy3/open_llama_7b_v2_med_instruct | yhyhy3 | 2023-07-10T16:22:39Z | 1,629 | 7 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"medical",
"code",
"en",
"dataset:ehartford/dolphin",
"dataset:LinhDuong/chatdoctor-200k",
"dataset:sahil2801/code_instructions_120k",
"dataset:medalpaca/medical_meadow_mediqa",
"dataset:kaiokendev/SuperCOT-dataset",
"dataset:tiiuae/falcon-refinedweb",
"dataset:bigcode/starcoderdata",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-09T17:19:43Z | ---
license: apache-2.0
datasets:
- ehartford/dolphin
- LinhDuong/chatdoctor-200k
- sahil2801/code_instructions_120k
- medalpaca/medical_meadow_mediqa
- kaiokendev/SuperCOT-dataset
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model is an instruction-tuned Open LLaMa model with 7B parameters, with specialities in medical QA and code instruction.
## Model Details
<!-- Provide a longer summary of what this model is. -->
- **Model type:** LlamaForCausalLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model (QLoRA):** [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2)
## How to Get Started with the Model
Use the code below to get started with the model.
```py
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'yhyhy3/open_llama_7b_v2_med_dolphin_qlora_merged'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = '''### Instruction: Answer the following question.
### Input: What is the capital of New Jersey?
### Response:'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
## Training Details
### Training Data
Converted the following datasets to the alpaca instruction format (a minimal conversion sketch follows the list).
1. [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin)
- ORCA style dataset generously created by [Eric Hartford](https://huggingface.co/ehartford)
- Only used the 1 million GPT4 generated instructions file [flan1m-alpaca-uncensored.jsonl](https://huggingface.co/datasets/ehartford/dolphin/blob/main/flan1m-alpaca-uncensored.jsonl).
2. [LinhDuong/chatdoctor-200k](https://huggingface.co/datasets/LinhDuong/chatdoctor-200k)
- Refined dataset sourced from icliniq medical QA forum
3. [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k)
- Code instruction dataset generously created by Sahil Chaudhary from ThreeSixty AI
4. [medalpaca/medical_meadow_mediqa](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
- MEDIQA is a dataset of manually generated, question-driven summaries of multi- and single-document answers to consumer health questions, from the medalpaca group.
5. [kaiokendev/SuperCOT-dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
- Code instruction dataset generously created by Kaio Ken
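A minimal sketch of the conversion described above, mapping a raw QA record into the standard alpaca schema (the wrapper instruction text is illustrative):

```python
# Hedged sketch: map a raw QA record into the alpaca instruction format.
import json

def to_alpaca(question: str, answer: str) -> dict:
    return {
        "instruction": "Answer the following question.",  # illustrative wrapper
        "input": question,
        "output": answer,
    }

print(json.dumps(to_alpaca("What is the capital of New Jersey?", "Trenton."), indent=2))
```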
### Training Procedure
Trained using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) QLoRA on [RunPod](https://www.runpod.io/console/gpu-cloud) 8x A6000 on Community Cloud for 3 epochs (~14 hours, ~$70).
<details>
<summary>axolotl training config:</summary>
```yaml
base_model: openlm-research/open_llama_7b_v2
base_model_config: openlm-research/open_llama_7b_v2
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
hub_model_id:
hf_use_auth_token:
datasets:
- path: json
type: alpaca
data_files: /disk/flan1m-alpaca-uncensored.jsonl
shards: 8
- path: sahil2801/code_instructions_120k
type: alpaca
- path: LinhDuong/chatdoctor-200k
type: alpaca
shards: 2
- path: kaiokendev/SuperCOT-dataset
type: alpaca
- path: medalpaca/medical_meadow_mediqa
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
adapter: qlora
lora_model_dir:
sequence_len: 2048
max_packed_sequence_len: 2048
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_mode: true
wandb_project:
wandb_watch:
wandb_run_id:
wandb_log_model: 'openllama_checkpoint'
output_dir: /disk/open_llama_7b_v2_dolphin_qlora
gradient_accumulation_steps: 2
micro_batch_size: 16
num_epochs: 3
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention: true
flash_attention:
gptq_groupsize:
gptq_model_v1:
warmup_steps: 1000
eval_steps: 5000
save_steps:
debug:
deepspeed:
weight_decay: 0.0000001
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details> |
YeungNLP/firefly-llama-30b | YeungNLP | 2023-07-28T17:05:14Z | 1,629 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-07-27T17:40:57Z | Entry not found |
edor/Platypus2-mini-7B | edor | 2023-08-16T01:35:11Z | 1,629 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-16T00:03:30Z | ---
license: other
---
A smaller version of Platypus2: Llama2-7B fine-tuned with QLoRA on the [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset. |
zarakiquemparte/zarablend-l2-7b | zarakiquemparte | 2023-08-17T18:48:36Z | 1,629 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-17T10:28:36Z | ---
license: other
tags:
- llama2
---
# Model Card: Zarablend L2 7b
This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (66%) as a base, merged with [Airoboros L2 7B GPT4 2.0](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-2.0) (34%); the result of that merge was then merged with [LimaRP Llama2 7B Lora](https://huggingface.co/lemonilia/limarp-llama2).
The model merge (Hermes and Airoboros) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py)
The LoRA was applied to the merged model with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py)
Quantized Model by @TheBloke:
- [GGML](https://huggingface.co/TheBloke/Zarablend-L2-7B-GGML)
- [GPTQ](https://huggingface.co/TheBloke/Zarablend-L2-7B-GPTQ)
Merge illustration:

## Usage:
Since this is a merge between Nous Hermes, Airoboros and LimaRP, the following instruction formats should work:
Alpaca 2:
```
### Instruction:
<prompt>
### Response:
<leave a newline blank for model to respond>
```
LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>
<<USER>>
<prompt>
<<AIBOT>>
<leave a newline blank for model to respond>
```
## Bias, Risks, and Limitations
This model is not intended for supplying factual information or advice in any form
## Training Details
This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details. |
uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b | uukuguy | 2023-08-30T14:51:06Z | 1,629 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"dataset:Open-Orca/OpenOrca",
"dataset:BAAI/COIG-PC-Lite",
"arxiv:2308.07317",
"arxiv:2306.02707",
"arxiv:2301.13688",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-30T14:28:26Z | ---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- BAAI/COIG-PC-Lite
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
<p><h1>🐋 The First Chinese OrcaPlatypus! 🐋</h1></p>
Fine-tuned from Open-Orca/OpenOrca-Platypus2-13B on 10% of COIG-PC-Lite, 10% of OpenOrca, and 100% of Open-Platypus to add Chinese capability.
<p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p>

# OpenOrca-Platypus2-13B
OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).
This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards!
Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).
[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)
We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners.
We will also give sneak-peak announcements on our Discord, which you can find here:
https://AlignmentLab.ai
# Evaluation
## HuggingFace Leaderboard Performance

| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 59.5 |
| ARC (25-shot) | 62.88 |
| HellaSwag (10-shot) | 83.19 |
| TruthfulQA (0-shot) | 52.69 |
| Avg. | 64.56 |
We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard.
Please see below for detailed instructions on reproducing benchmark results.
## AGIEval Performance
We compare our results to our base Preview2 model (using LM Evaluation Harness).
We find **112%** of the base model's performance on AGI Eval, averaging **0.463**.
A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance.

## BigBench-Hard Performance
We compare our results to our base Preview2 model (using LM Evaluation Harness).
We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**.

# Model Details
* **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca
* **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial
# Prompting
## Prompt Template for base Platypus2-13B
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B
OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information.
# Training
## Training Datasets
`garage-bAInd/Platypus2-13B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
`Open-Orca/OpenOrcaxOpenChat-Preview2-13B` trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).
## Training Procedure
`Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB.
For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.
# Supplemental
## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval)
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
Each task was evaluated on a single A100-80GB GPU.
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda
```
## Limitations and bias
Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
# Citations
```bibtex
@software{hunterlee2023orcaplaty1,
title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model},
author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}},
}
@article{platypus2023,
title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
booktitle={arXiv preprint arxiv:2308.07317},
year={2023}
}
@software{OpenOrcaxOpenChatPreview2,
title = {OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset},
author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}},
}
@software{openchat,
title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
doi = {10.5281/zenodo.8105775},
url = {https://github.com/imoneoi/openchat},
version = {pre-release},
year = {2023},
month = {7},
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv}
}
@misc{longpre2023flan,
title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
year={2023},
eprint={2301.13688},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
@article{hu2021lora,
title={LoRA: Low-Rank Adaptation of Large Language Models},
author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
journal={CoRR},
year={2021}
}
```
|
Sao10K/Stheno-Inverted-L2-13B | Sao10K | 2023-11-18T04:32:42Z | 1,629 | 3 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-31T15:40:21Z | ---
license: llama2
language:
- en
---
<img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg" style="width: 70%; min-width: 300px; display: block; margin: auto;">
The sister Model of [Stheno-L2-13B](https://huggingface.co/Sao10K/Stheno-L2-13B)
Stheno Inverted:
<br>Gradient Merge of Stheno-P2 & Stheno-P1, Models are in Inverted Positions
Quants courtesy of TheBloke!
<br>[GPTQ](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GPTQ)
<br>[GGUF](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGUF)
<br>[GGML](https://huggingface.co/TheBloke/Stheno-Inverted-L2-13B-GGML)
Test Checklist:
<br>Censorship - Fairly Uncensored
<br>Writing - Good Prose, Fairly Descriptive
<br>NSFW - Yes
<br>IQ Level - Pretty Smart
<br>Formatting - Proper Formatting with Examples
*Noticeable difference with Stheno-L2. From personal tests: A bit more verbose, a little less smart, and a little more forward with NSFW compared to regular Stheno.*
Stheno-P1 [Ties-Merge]
<br>-----[elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
<br>-----[jondurbin/airoboros-l2-13b-2.1](https://huggingface.co/jondurbin/airoboros-l2-13b-2.1)
<br>-----[NousResearch/Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)+[nRuaif/Kimiko-v2 **LORA**](https://huggingface.co/nRuaif/Kimiko-v2-13B)
Stheno-P2 [Ties-Merge]
<br>-----[CalderaAI/13B-Legerdemain-L2](https://huggingface.co/CalderaAI/13B-Legerdemain-L2)+[lemonilia/limarp-llama2-v2 **LORA**](https://huggingface.co/lemonilia/limarp-llama2-v2)
<br>-----[ehartford/WizardLM-1.0-Uncensored-Llama2-13b](https://huggingface.co/ehartford/WizardLM-1.0-Uncensored-Llama2-13b)
<br>-----[Henk717/spring-dragon](https://huggingface.co/Henk717/spring-dragon)
Most formats could work, but my tests have all been done in Alpaca format and it works well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
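To drive this format programmatically, here is a minimal transformers sketch; the character names and sampling settings are illustrative, not recommendations:

```python
# Minimal sketch of using the Alpaca format with transformers
# (character names and sampling settings are placeholders).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Sao10K/Stheno-Inverted-L2-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = """### Instruction:
Write Mina's next reply in a chat between Anon and Mina. Write a single reply only.

### Response:
"""
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```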
Below is the Illustration for the Final Merge:

Once Again, thanks to [Chargoddard](https://huggingface.co/chargoddard) for his amazing and simple [ties-merge](https://github.com/cg123/ties-merge) script, and [Gryphe](https://huggingface.co/Gryphe) for their great [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) script.
Thank you to the original model creators too!
```
Art by wada_kazu / わだかず (pixiv page private?)
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Stheno-Inverted-L2-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 49.57 |
| ARC (25-shot) | 59.3 |
| HellaSwag (10-shot) | 82.9 |
| MMLU (5-shot) | 56.45 |
| TruthfulQA (0-shot) | 52.04 |
| Winogrande (5-shot) | 74.74 |
| GSM8K (5-shot) | 13.19 |
| DROP (3-shot) | 8.33 |
|
Sao10K/Stheno-1.2-L2-13B | Sao10K | 2023-11-18T04:34:32Z | 1,629 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-06T14:18:35Z | ---
license: llama2
language:
- en
---
***ONLY UPLOADED FROM RUNPOD JUST TO TEST ON OWN SYSTEM. UNTESTED SO FAR. V2 SOON***
***CURRENT CHANGES:***
<br>***INCREASED MODEL WEIGHTS AND DENSITIES IN TIES-MERGE FOR P1 & P2***
<br>***GRADIENT MERGE BETWEEN P1 & P2 CAN'T BE ILLUSTRATED, TENSORS EACH HAD UNIQUE RATIOS AND GRADIENTS APPLIED***
An experimental merge of several models using two different methods, [Ties-Merge](https://github.com/cg123/ties-merge) and [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient)
Stheno:
<br>Gradient Merge of Stheno-P1 & Stheno-P2.
Test Checklist:
<br>Censorship - ____
<br>Writing - ____
<br>NSFW - ___
<br>IQ Level - ___
<br>Formatting - ____
Most formats could work; use the Alpaca format, which works well.
```
### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
### Response:
```
Gradient Merge Pictures Unavailable, Several Different Tensor Ratios applied.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Stheno-1.2-L2-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 54.8 |
| ARC (25-shot) | 60.75 |
| HellaSwag (10-shot) | 83.67 |
| MMLU (5-shot) | 56.27 |
| TruthfulQA (0-shot) | 50.32 |
| Winogrande (5-shot) | 74.98 |
| GSM8K (5-shot) | 10.92 |
| DROP (3-shot) | 46.72 |
|
TinyPixel/testmodel-3 | TinyPixel | 2023-09-22T11:34:24Z | 1,629 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-22T11:28:51Z | Entry not found |
gradientputri/MegaMix-T1-13B | gradientputri | 2023-10-02T10:21:34Z | 1,629 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-09-29T19:32:03Z | ---
license: llama2
language:
- en
---
Hi hi!
I don't remember what I did but I merged a dozen models or something :>
Pretty sure I did not use merged models, just finetuned ones so it's not stupid or something :?
Maybe.
My friendo tested it and said S1 was smartest but he liked A1 the most so u choose hehe
T1 is like the middle child :(((((((( |
CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down | CHIH-HUNG | 2023-10-08T01:00:51Z | 1,629 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-08T00:41:46Z | Entry not found |
speechlessai/speechless-coding-7b-16k-tora | speechlessai | 2023-11-27T06:33:30Z | 1,629 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"code",
"en",
"dataset:jondurbin/airoboros-2.2",
"dataset:Open-Orca/OpenOrca",
"dataset:garage-bAInd/Open-Platypus",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"dataset:TokenBender/python_eval_instruct_51k",
"license:llama2",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-11-27T01:04:28Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- jondurbin/airoboros-2.2
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
- TokenBender/python_eval_instruct_51k
tags:
- llama-2
- code
license: llama2
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 52.439
verified: false
---
<p><h1> speechless-coding-7b-16k-tora </h1></p>
Use the following datasets to fine-tune llm_agents/tora-code-7b-v1.0 in order to improve the model's reasoning and planning abilities.
context window length: 16,384
prompt_type = "alpaca"
max_tokens > 128 && < 16384
> Total: 177,333 samples, 316 MB
- jondurbin/airoboros-2.2: Filter categories related to coding, reasoning and planning. 21,923 samples.
- Open-Orca/OpenOrca: Filter the 'cot' category in 1M GPT4 dataset. 62,973 samples.
- garage-bAInd/Open-Platypus: 100%, 22,760 samples.
- WizardLM/WizardLM_evol_instruct_V2_196k: Coding coversation part. 30,081 samples
- TokenBender/python_eval_instruct_51k: “python” in output. 39,596 samples.
Evaluation sampling settings: 50 samples, T=0.2, MaxTokens=512, Top_P=0.95
Code: https://github.com/uukuguy/speechless
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | 52.44 |
[Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
CodeLlama-34B-Python: 53.29
CodeLlama-34B-Instruct: 50.79
CodeLlama-13B-Instruct: 50.6
CodeLlama-34B: 45.11
CodeLlama-13B-Python: 42.89
CodeLlama-13B: 35.07
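For context, pass@1 over n sampled completions (50 per problem here, per the sampling settings above) is conventionally computed with the unbiased estimator from the HumanEval paper; a minimal sketch:

```python
# Unbiased pass@k estimator from the HumanEval paper:
# n = samples generated per problem, c = samples that pass the tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=50, c=27, k=1))  # 0.54, a hypothetical per-problem example
```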
## MultiPL-E
| Metric | Value |
| --- | --- |
| python | 55.96 |
| java | 37.84 |
| javascript | 46.93 |
| cpp | 37.48 |
| rust | 29.01 |
| go | 28.99 |
| sh | 12.11 |
| julia | 31.47 |
| typescript | 47.80 |
## LMEval
[Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |
## Parameters
| | |
|------ | ------ |
| lr | 2e-4 |
| lr_scheduler_type | cosine |
| weight_decay | 0.0 |
| optim | paged_adamw_8bit |
| flash_attention | True |
| rerope | False |
| max_new_tokens | 16384 |
| num_train_epochs | 2 |
| bits | 4 |
| lora_r | 64 |
| lora_alpha | 256 |
| lora_dropout | 0.05 |
| double_quant | True |
| quant_type | nf4 |
| dataset_format | sharegpt |
| mini_batch_size | 2 |
| gradient_accumulation_steps | 32 |
| bf16 | True |
A100-40G x 4
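The quantization rows above (bits=4, quant_type=nf4, double_quant=True, bf16) correspond to a bitsandbytes configuration along these lines; this is a hedged sketch of loading the base model, not the actual training entry point:

```python
# Hedged sketch of the 4-bit NF4 + double-quantization setup implied by the
# parameter table above; illustrative only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # bits: 4
    bnb_4bit_quant_type="nf4",              # quant_type: nf4
    bnb_4bit_use_double_quant=True,         # double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16: True
)
model = AutoModelForCausalLM.from_pretrained(
    "llm_agents/tora-code-7b-v1.0",         # the base model named above
    quantization_config=bnb_config,
    device_map="auto",
)
```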
|
perlthoughts/Chupacabra-7B-v2.01 | perlthoughts | 2024-03-04T18:06:14Z | 1,629 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-04T18:36:48Z | ---
license: apache-2.0
model-index:
- name: Chupacabra-7B-v2.01
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.5
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 59.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=perlthoughts/Chupacabra-7B-v2.01
name: Open LLM Leaderboard
---
# Chupacabra 7B v2.01
<p><img src="https://huggingface.co/perlthoughts/Chupacabra-7B/resolve/main/chupacabra7b%202.png" width=330></p>
### Model Description
Dare-ties merge method.
List of all models and merging path is coming soon.
## Purpose
Merging the "thick"est model weights from Mistral models using training methods like direct preference optimization (DPO) and reinforcement learning.
I have spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters and optimizers, and optimized code until I achieved the best possible results.
Thank you openchat 3.5 for showing me the way.
Here is my contribution.
## Prompt Template
Replace {system} with your system prompt, and {prompt} with your prompt instruction.
```
### System:
{system}
### User:
{prompt}
### Assistant:
```
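A minimal helper for filling this template (the system message and example prompt are illustrative):

```python
# Minimal sketch: fill the System/User/Assistant template above.
def build_prompt(system: str, prompt: str) -> str:
    return f"### System:\n{system}\n\n### User:\n{prompt}\n\n### Assistant:\n"

print(build_prompt("You are a helpful assistant.", "Explain DPO in one sentence."))
```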
### Bug fixes
- Fixed an issue with generation caused by incorrect model weights. The weights have been corrected and generation now works again. Re-uploading the GGUF files to the GGUF repository, as well as the AWQ versions.
- **Developed by:** Ray Hernandez
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache 2.0
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_perlthoughts__Chupacabra-7B-v2.01)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.43|
|AI2 Reasoning Challenge (25-Shot)|68.86|
|HellaSwag (10-Shot) |86.12|
|MMLU (5-Shot) |63.90|
|TruthfulQA (0-shot) |63.50|
|Winogrande (5-shot) |80.51|
|GSM8k (5-shot) |59.67|
|
ConvexAI/Seraphim-8x10.7B-bf16 | ConvexAI | 2024-01-20T14:24:32Z | 1,629 | 3 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"moe",
"conversational",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-19T17:08:12Z | ---
license: apache-2.0
language:
- en
tags:
- merge
- moe
---

# Seraphim
A frankenMoE using SOLAR 10.7B as the architecture. The models used are as follows:
- [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) - base
- [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) - expert #1
- [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) - expert #2
- [Sao10K/Sensualize-Solar-10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) - expert #3
- [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) - expert #4
- [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) - expert #5
- [Sao10K/Sensualize-Solar-10.7B](https://huggingface.co/Sao10K/Sensualize-Solar-10.7B) - expert #6
- [Sao10K/SOLAR-10.7B-NahIdWin](https://huggingface.co/Sao10K/SOLAR-10.7B-NahIdWin) - expert #7
- [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) - expert #8
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 8 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
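To make the routing concrete, here is a toy top-2 MoE layer in PyTorch: a didactic sketch, not Seraphim's actual implementation.

```python
# Toy sketch (not Seraphim's implementation): a top-2 MoE layer where a
# learned router picks 2 of 8 expert FFNs per token and combines their
# outputs additively, weighted by the router's softmax scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # the gate network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                               # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # route each token to its top-k experts
        weights = F.softmax(weights, dim=-1)            # normalize the k gate scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):                     # combine expert outputs additively
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] = out[mask] + weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(10, 64)        # 10 tokens of width 64
print(ToyMoELayer()(x).shape)  # torch.Size([10, 64])
```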
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
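A toy sketch of such a load-balancing loss (Switch-Transformer style; in practice it is scaled by a small coefficient and added to the main training loss):

```python
# Toy sketch of a Switch-Transformer-style load-balancing auxiliary loss:
# penalize the router when token traffic concentrates on a few experts.
import torch
import torch.nn.functional as F

def load_balancing_loss(router_logits: torch.Tensor, n_experts: int) -> torch.Tensor:
    probs = F.softmax(router_logits, dim=-1)                   # (tokens, n_experts)
    top1 = probs.argmax(dim=-1)                                # expert chosen per token
    frac_tokens = F.one_hot(top1, n_experts).float().mean(0)   # share of tokens per expert
    frac_probs = probs.mean(0)                                 # mean gate prob per expert
    return n_experts * torch.sum(frac_tokens * frac_probs)     # equals 1 when perfectly uniform

logits = torch.randn(100, 8)  # router logits for 100 tokens, 8 experts
print(load_balancing_loss(logits, 8))
```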
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously. |
legraphista/RoLlama2-7b-Instruct-IMat-GGUF | legraphista | 2024-05-25T17:44:05Z | 1,629 | 0 | gguf | [
"gguf",
"quantized",
"GGUF",
"imatrix",
"quantization",
"text-generation",
"ro",
"base_model:OpenLLM-Ro/RoLlama2-7b-Instruct",
"license:cc-by-nc-4.0",
"region:us"
] | text-generation | 2024-05-25T17:03:09Z | ---
base_model: OpenLLM-Ro/RoLlama2-7b-Instruct
inference: false
language:
- ro
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
---
# RoLlama2-7b-Instruct-IMat-GGUF
_Llama.cpp imatrix quantization of OpenLLM-Ro/RoLlama2-7b-Instruct_
Original Model: [OpenLLM-Ro/RoLlama2-7b-Instruct](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct)
Original dtype: `FP32` (`float32`)
Quantized by: llama.cpp [b2998](https://github.com/ggerganov/llama.cpp/releases/tag/b2998)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [RoLlama2-7b-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q8_0.gguf) | Q8_0 | 7.16GB | ✅ Available | ⚪ No | 📦 No
| [RoLlama2-7b-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q6_K.gguf) | Q6_K | 5.53GB | ✅ Available | ⚪ No | 📦 No
| [RoLlama2-7b-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q4_K.gguf) | Q4_K | 4.08GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q3_K.gguf) | Q3_K | 3.30GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q2_K.gguf) | Q2_K | 2.53GB | ✅ Available | 🟢 Yes | 📦 No
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [RoLlama2-7b-Instruct.FP16.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.FP16.gguf) | F16 | 13.48GB | ✅ Available | ⚪ No | 📦 No
| [RoLlama2-7b-Instruct.BF16.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.BF16.gguf) | BF16 | 13.48GB | ✅ Available | ⚪ No | 📦 No
| [RoLlama2-7b-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q5_K.gguf) | Q5_K | 4.78GB | ✅ Available | ⚪ No | 📦 No
| [RoLlama2-7b-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q5_K_S.gguf) | Q5_K_S | 4.65GB | ✅ Available | ⚪ No | 📦 No
| [RoLlama2-7b-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q4_K_S.gguf) | Q4_K_S | 3.86GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.60GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q3_K_S.gguf) | Q3_K_S | 2.95GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.Q2_K_S.gguf) | Q2_K_S | 2.32GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ4_NL.gguf) | IQ4_NL | 3.83GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ4_XS.gguf) | IQ4_XS | 3.62GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ3_M.gguf) | IQ3_M | 3.11GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ3_S.gguf) | IQ3_S | 2.95GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ3_XS.gguf) | IQ3_XS | 2.80GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 2.59GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ2_M.gguf) | IQ2_M | 2.36GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ2_S.gguf) | IQ2_S | 2.20GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ2_XS.gguf) | IQ2_XS | 2.03GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 1.85GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ1_M.gguf) | IQ1_M | 1.65GB | ✅ Available | 🟢 Yes | 📦 No
| [RoLlama2-7b-Instruct.IQ1_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Instruct-IMat-GGUF/blob/main/RoLlama2-7b-Instruct.IQ1_S.gguf) | IQ1_S | 1.53GB | ✅ Available | 🟢 Yes | 📦 No
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/RoLlama2-7b-Instruct-IMat-GGUF --include "RoLlama2-7b-Instruct.Q8_0.gguf" --local-dir ./
```
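The same download can also be scripted from Python with `huggingface_hub`, the library that backs huggingface-cli; a minimal sketch:

```python
# Minimal sketch: fetch one quant file with huggingface_hub
# (the library behind huggingface-cli).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/RoLlama2-7b-Instruct-IMat-GGUF",
    filename="RoLlama2-7b-Instruct.Q8_0.gguf",
    local_dir="./",
)
print(path)  # local path of the downloaded file
```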
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/RoLlama2-7b-Instruct-IMat-GGUF --include "RoLlama2-7b-Instruct.Q8_0/*" --local-dir RoLlama2-7b-Instruct.Q8_0
# see FAQ for merging GGUF's
```
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `RoLlama2-7b-Instruct.Q8_0`)
3. Run `gguf-split --merge RoLlama2-7b-Instruct.Q8_0/RoLlama2-7b-Instruct.Q8_0-00001-of-XXXXX.gguf RoLlama2-7b-Instruct.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |