---
language:
- en
base_model: grimjim/llama-3-experiment-v1-9B
quantized_by: grimjim
library_name: transformers
tags:
- meta
- llama-3
- pytorch
license: llama3
license_link: LICENSE
pipeline_tag: text-generation
widget:
- example_title: Hello
  messages:
  - role: user
    content: Hey my name is Corwin! How are you?
- example_title: Hellriding out of Amber
  messages:
  - role: system
    content: You are a helpful and honest assistant. Please, respond concisely and truthfully.
  - role: user
    content: Can you recommend a good destination for a hellride out of Amber?
inference:
  parameters:
    max_new_tokens: 300
    stop:
    - <|end_of_text|>
    - <|eot_id|>
---
# llama-3-experiment-v1-9B-GGUF
This is an experimental merge that extends the model by duplicating a span of its layers (see the configuration below), without post-merge healing. The duplication causes some damage to the model, but it appears tolerable as is. The resulting impact on narrative text completion may be of interest.
Light testing was performed with instruct prompting and the following sampler settings (a loading sketch follows the links below):
- temp=1 and minP=0.02
- temp=1 and smoothing factor=0.33
- Full weights: [grimjim/llama-3-experiment-v1-9B](https://huggingface.co/grimjim/llama-3-experiment-v1-9B)
- GGUF quants: [grimjim/llama-3-experiment-v1-9B-GGUF](https://huggingface.co/grimjim/llama-3-experiment-v1-9B-GGUF)
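The quants can be run with any llama.cpp-based stack. Below is a minimal sketch using llama-cpp-python and the first of the tested sampler settings; the quant filename is an assumption, so check the repository's file list for the variants actually provided.

```python
# Minimal loading/generation sketch with llama-cpp-python (one of several
# GGUF runtimes). The filename glob is an assumed quant level, not a
# guarantee of what the repo contains.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="grimjim/llama-3-experiment-v1-9B-GGUF",
    filename="*Q8_0.gguf",  # assumed quant; substitute a file that exists
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hey my name is Corwin! How are you?"}],
    temperature=1.0,  # temp=1 from the tested settings
    min_p=0.02,       # minP=0.02 from the tested settings
    max_tokens=300,
)
print(out["choices"][0]["message"]["content"])
```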
This model is a merge of the pre-trained language model [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), created using [mergekit](https://github.com/cg123/mergekit).
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
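For intuition, here is a minimal conceptual sketch (not mergekit's internals) of what passthrough does with the slice configuration listed below: the named layer ranges are concatenated as-is, so overlapping layers appear twice.

```python
# Conceptual sketch only, not mergekit's implementation: passthrough
# concatenates the decoder-layer ranges named in each slice, keeping
# overlapping layers as duplicates.
layers = list(range(32))              # Meta-Llama-3-8B-Instruct has 32 decoder layers
merged = layers[0:12] + layers[8:32]  # the two slices from the config below

print(len(merged))                    # 36 layers in the merged stack
print(sorted(set(merged)) == layers)  # True: every original layer is retained
# Layers 8-11 occur twice; those four extra layers grow the model from 8B to ~9B.
```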
### Models Merged
The following models were included in the merge:
* meta-llama/Meta-Llama-3-8B-Instruct
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    layer_range: [0, 12]
- sources:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
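To reproduce the merge locally, mergekit's Python API can be pointed at this configuration. A sketch following the usage shown in mergekit's README; the config path and output directory are placeholders:

```python
# Sketch of reproducing the merge via mergekit's Python API
# (pip install mergekit); paths below are placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml") as f:  # the YAML configuration shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./llama-3-experiment-v1-9B",  # placeholder output directory
    options=MergeOptions(copy_tokenizer=True),
)
```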