---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
- ValiantLabs/Llama3.1-8B-Enigma
- ValiantLabs/Llama3.1-8B-ShiningValiant2
library_name: transformers
model-index:
- name: Llama3.1-8B-PlumCode
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-Shot)
      type: Winogrande
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 73.16
      name: acc
tags:
- mergekit
- merge
- shining-valiant
- shining-valiant-2
- enigma
- plum
- plumcode
- code
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- code-instruct
- python
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- technical
- conversational
- chat
- instruct
---
# PlumCode
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della merge method, with [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as the base model. Roughly, DELLA drops a magnitude-sampled fraction of each contributing model's delta parameters (controlled by `density`), rescales the survivors, and adds the weighted deltas (controlled by `weight`) back onto the base.
### Models Merged
The following models were included in the merge:
* [ValiantLabs/Llama3.1-8B-ShiningValiant2](https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2)
* [ValiantLabs/Llama3.1-8B-Enigma](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: della
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: ValiantLabs/Llama3.1-8B-ShiningValiant2
    parameters:
      density: 0.5
      weight: 0.3
  - model: ValiantLabs/Llama3.1-8B-Enigma
    parameters:
      density: 0.5
      weight: 0.25
base_model: meta-llama/Llama-3.1-8B-Instruct
```
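To reproduce the merge, save the configuration above to a file and run mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./Llama3.1-8B-PlumCode`.

The merged checkpoint loads like any other Llama 3.1 Instruct model via `transformers`. A minimal sketch, assuming the model is available under the repo id `ValiantLabs/Llama3.1-8B-PlumCode` (swap in a local merge output directory if you ran the merge yourself):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Llama3.1-8B-PlumCode"  # assumed repo id; use a local path if needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Llama 3.1 Instruct checkpoints expect the chat template.
messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```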