---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
An attempt at using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) on [Pygmalion2](https://huggingface.co/PygmalionAI/pygmalion-2-13b) to get better results.

In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used; it is recommended to read its documentation.
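The core idea behind a gradient block merge is to interpolate the two models' weights layer by layer, with a merge ratio that ramps across the layer stack instead of staying constant. A minimal sketch of that idea follows; the helper names are hypothetical, and the real BlockMerge_Gradient script operates on full Hugging Face checkpoints rather than toy lists:

```python
# Sketch of gradient block merging: blend two models' per-layer weights
# with a ratio that ramps linearly across the layers. Helper names are
# hypothetical; see the BlockMerge_Gradient repo for the real tool.

def layer_ratios(num_layers, start=0.0, end=1.0):
    """Linear gradient of merge ratios, one per layer."""
    if num_layers == 1:
        return [start]
    step = (end - start) / (num_layers - 1)
    return [start + i * step for i in range(num_layers)]

def merge_layers(model_a, model_b, ratios):
    """Blend per-layer weights: (1 - r) * A + r * B."""
    return [(1.0 - r) * wa + r * wb
            for wa, wb, r in zip(model_a, model_b, ratios)]

# Toy example with one scalar "weight" per layer: the first layer stays
# pure model A, the last becomes pure model B.
a = [1.0, 1.0, 1.0, 1.0]
b = [3.0, 3.0, 3.0, 3.0]
print(merge_layers(a, b, layer_ratios(len(a))))
```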
<!-- description start -->
## Description

This repo contains fp16 files of Emerald-13B.
<!-- description end -->
<!-- description start -->
## Models and LoRAs used

- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```
<!-- prompt-template end -->
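A minimal sketch of filling this template in Python. The repo id `Undi95/Emerald-13B` and the `transformers` generation helper are assumptions for illustration; only the prompt builder is meant to run as-is, since actually calling `generate()` would download the full fp16 checkpoint:

```python
# Build the Alpaca prompt shown above. The generation helper is a sketch
# (assumes the fp16 weights are published as Undi95/Emerald-13B).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

def generate(instruction: str) -> str:
    """Sketch only: loads ~26 GB of fp16 weights when actually called."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Undi95/Emerald-13B")
    model = AutoModelForCausalLM.from_pretrained(
        "Undi95/Emerald-13B", torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    output = model.generate(**inputs.to(model.device), max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(build_prompt("Write a haiku about emeralds."))
```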
## LimaRP v3 usage and suggested settings

You can follow these instruction format settings in SillyTavern. Replace "tiny" with your desired response length.

Special thanks to Sushi.
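LimaRP v3 controls reply length through a modifier appended to the response header, e.g. `(length = tiny)`; consult its documentation for the exact accepted values. A small sketch of building such a response line (the set of length names here is an assumption):

```python
# Sketch of LimaRP v3's response-length modifier. The set of accepted
# length names below is an assumption; check the LimaRP v3 docs.
LENGTHS = ("tiny", "short", "medium", "long")

def response_header(character: str, length: str = "medium") -> str:
    """Response line carrying the LimaRP length modifier."""
    if length not in LENGTHS:
        raise ValueError(f"unknown length: {length!r}")
    return f"### Response: (length = {length})\n{character}:"

print(response_header("Emerald", "tiny"))
```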
If you want to support me, you can do so [here](https://ko-fi.com/undiai).

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Emerald-13B).
| Metric              | Value |
|---------------------|-------|
| Avg.                | 51.39 |
| ARC (25-shot)       | 62.29 |
| HellaSwag (10-shot) | 83.69 |
| MMLU (5-shot)       | 55.7  |
| TruthfulQA (0-shot) | 50.94 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot)      | 12.81 |
| DROP (3-shot)       | 18.38 |