---
license: other
tags:
- llama
- pytorch
- chatbot
- storywriting
- generalist-model
---

# chronos-13b-v2

This is the 4-bit GPTQ quantization of **chronos-13b-v2**, based on the **LLaMA v2** base model. It works with ExLlama and AutoGPTQ.
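As a minimal sketch, loading the quantized weights with AutoGPTQ might look like the following. The repository id used here is an assumption; substitute this repository's actual id.

```python
# Sketch: load this 4-bit GPTQ checkpoint with AutoGPTQ.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

repo_id = "elinas/chronos-13b-v2-GPTQ"  # hypothetical id; replace with this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo_id,
    device="cuda:0",        # GPTQ inference runs on a CUDA device
    use_safetensors=True,   # assumption: weights are stored as safetensors
)
```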

This model is primarily focused on chat, roleplay, and storywriting, with good reasoning and logic.

Chronos can generate very long, coherent outputs, largely due to the human-written inputs it was trained on, and it supports a context length of up to 4096 tokens.

This model uses Alpaca formatting, so for optimal performance, either use a frontend like SillyTavern or continue your story with the following format:
```
### Instruction:
Your instruction or question here.
### Response:
```
Not using the format will make the model perform significantly worse than intended.
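As a rough illustration, the Alpaca prompt above can be assembled and passed to the model loaded earlier. The instruction text and sampling parameters below are placeholder assumptions, not tuned recommendations.

```python
# Sketch: wrap a user instruction in the Alpaca template shown above and generate.
prompt = (
    "### Instruction:\n"
    "Write a short scene set in a rain-soaked city at night.\n"  # example instruction
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,  # illustrative sampling settings
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```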

## Other Versions
[Original FP16 Model](https://huggingface.co/elinas/chronos-13b-v2)

[GGML Versions provided by @TheBloke](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML)

**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;' 
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>