---
license: apache-2.0
datasets:
  - Fredithefish/openassistant-guanaco-unfiltered
language:
  - en
library_name: transformers
pipeline_tag: conversational
inference: false
---

# ✨ Guanaco - 7B - Uncensored ✨

Guanaco-7B-Uncensored has been fine-tuned for 4 epochs on the Unfiltered Guanaco Dataset, using Llama-2-7b as the base model.
The model does not perform well in languages other than English.
Please note: this model is designed to provide responses without content filtering or censorship. It generates answers without refusals.

## Special thanks

I would like to thank AutoMeta for providing me with the computing power necessary to train this model.

## Prompt Template

```
### Human: {prompt} ### Assistant:
```
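As a minimal sketch, the template above can be applied in code before generation. The helper below and the commented `transformers` usage are illustrative assumptions, not an official inference script; replace the model repo id with the actual one on the Hub.

```python
def format_prompt(prompt: str) -> str:
    """Wrap a user message in the card's Human/Assistant template."""
    return f"### Human: {prompt} ### Assistant:"


# Assumed typical Hugging Face transformers flow (model repo id is a placeholder):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("<model-repo-id>")
# model = AutoModelForCausalLM.from_pretrained("<model-repo-id>")
# inputs = tokenizer(format_prompt("What is a guanaco?"), return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=128)
# print(tokenizer.decode(output[0], skip_special_tokens=True))

print(format_prompt("What is a guanaco?"))
```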

## Dataset

The model has been fine-tuned on the V2 of the Guanaco unfiltered dataset.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 43.13 |
| ARC (25-shot) | 52.13 |
| HellaSwag (10-shot) | 78.77 |
| MMLU (5-shot) | 43.42 |
| TruthfulQA (0-shot) | 44.45 |
| Winogrande (5-shot) | 73.09 |
| GSM8K (5-shot) | 4.25 |
| DROP (3-shot) | 5.82 |