---
license: gpl-3.0
---
### 8-bit quantization and 128 groupsize for LLaMA 7B
This is a Chinese instruction-tuned LoRA checkpoint based on LLaMA-13B, from the work in [this repo](https://github.com/Facico/Chinese-Vicuna).
It consumes approximately 8.5 GB of GPU memory.
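As a back-of-envelope check (not from the original card): 8-bit quantization stores roughly one byte per parameter, so a 7B model needs about 7 GB for weights alone, and runtime overhead (activations, KV cache, dequantization buffers) plausibly accounts for the rest. The 20% overhead factor below is an assumption, not a measured value.

```python
# Rough VRAM estimate for an 8-bit 7B model.
# Assumptions: ~1 byte per weight at 8-bit, and an assumed ~20%
# overhead for activations and the KV cache.
params = 7e9                    # 7B parameters
weights_gb = params * 1 / 1e9   # ~7.0 GB of 8-bit weights
total_gb = weights_gb * 1.2     # ~8.4 GB, near the ~8.5 GB figure above
print(round(total_gb, 1))
```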
```text | |
"input":the mean of life is | |
"output":the mean of life is 70 years. | |
the median age at death in a population, regardless if it's male or female? | |
```
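A minimal loading sketch (not part of the original card): it assumes the base model and this LoRA adapter are published on the Hugging Face Hub and that `transformers`, `peft`, and `bitsandbytes` are installed. The repo ids passed in by the caller are placeholders, not confirmed names.

```python
def load_quantized_lora(base_id: str, adapter_id: str):
    """Load a base LLaMA model in 8-bit and attach a LoRA adapter.

    `base_id` and `adapter_id` are Hugging Face Hub repo ids supplied
    by the caller; the exact ids for this checkpoint are not given here.
    """
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    tokenizer = LlamaTokenizer.from_pretrained(base_id)
    model = LlamaForCausalLM.from_pretrained(
        base_id,
        load_in_8bit=True,   # 8-bit weights via bitsandbytes
        device_map="auto",   # place layers on available GPUs
    )
    # Attach the instruction-tuning LoRA weights on top of the base model.
    model = PeftModel.from_pretrained(model, adapter_id)
    return tokenizer, model

# Usage (requires a GPU and network access):
#   tokenizer, model = load_quantized_lora("<base-repo-id>", "<lora-repo-id>")
```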