---
language:
- en
pipeline_tag: text-generation
tags:
- shining-valiant
- shining-valiant-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- conversational
- chat
- instruct
model_type: llama
license: llama3.1
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/Llama3.1-8B-ShiningValiant2-GGUF
This is a quantized version of [ValiantLabs/Llama3.1-8B-ShiningValiant2](https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2), created using llama.cpp.
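If you want to run one of the GGUF files from this repo directly (rather than the original safetensors model), a minimal sketch using llama-cpp-python is shown below. The quant filename is an assumption; substitute whichever quant you downloaded from this repo.

```python
from llama_cpp import Llama

# Load a local GGUF quant (filename is an assumption -- use the file you downloaded).
llm = Llama(
    model_path="Llama3.1-8B-ShiningValiant2.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
        {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
    ],
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])
```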
# Original Model Card
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/EXX7TKbB-R6arxww2mk0R.jpeg)
Shining Valiant 2 is a chat model built on Llama 3.1 8b, finetuned on our data for friendship, insight, knowledge and enthusiasm.
- Finetuned on [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) for best available general performance
- Trained on our data, focused on science, engineering, technical knowledge, and structured reasoning
## Version
This is the **2024-08-06** release of Shining Valiant 2 for Llama 3.1 8b.
Our newest dataset improves specialist knowledge and response consistency.
Help us and recommend Shining Valiant 2 to your friends!
## Prompting Guide
Shining Valiant 2 uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-ShiningValiant2"

# Load the model with bfloat16 weights and automatic device placement.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
)

# The pipeline returns the full conversation; the last entry is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
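If you build prompts yourself instead of passing a message list, the Llama 3.1 Instruct prompt string can be rendered with the tokenizer's chat template. A minimal sketch, assuming the same message structure as above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Llama3.1-8B-ShiningValiant2")

messages = [
    {"role": "system", "content": "You are Shining Valiant, a highly capable chat AI."},
    {"role": "user", "content": "Describe the role of transformation matrices in 3D graphics."},
]

# Render the prompt string that the pipeline above builds internally.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the header for the assistant's turn
)
print(prompt)
```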
## The Model
Shining Valiant 2 is built on top of Llama 3.1 8b Instruct.
The current version of Shining Valiant 2 is trained mostly on our private Shining Valiant data, supplemented by [LDJnr/Pure-Dove](https://huggingface.co/datasets/LDJnr/Pure-Dove) for response flexibility.
Our private data adds specialist knowledge and Shining Valiant's personality: she's friendly, enthusiastic, insightful, knowledgeable, and loves to learn! Magical.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg)
Shining Valiant 2 is created by [Valiant Labs.](http://valiantlabs.ca/)
[Check out our HuggingFace page for Fireplace 2 and our other models!](https://huggingface.co/ValiantLabs)
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models.