---
language:
- en
- zh
license: other
pipeline_tag: text-generation
---
|
## hongyin/chat-llama2-70b-4bit |
|
|
|
I am pleased to introduce a 4-bit quantized version of the Llama-2-70B-chat model, aimed at reducing the cost of training and inference.
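As a minimal loading sketch, the quantized checkpoint can be pulled with `transformers`; this assumes `transformers` and `bitsandbytes` are installed and enough GPU/CPU memory is available (the repo id is taken from this card's title; everything else is illustrative, not an official usage guide):

```python
# Hypothetical loading sketch for this card's checkpoint.
# Assumes `transformers` and `bitsandbytes` are installed.
MODEL_ID = "hongyin/chat-llama2-70b-4bit"

def load_chat_model(model_id: str = MODEL_ID):
    """Load the tokenizer and the model with 4-bit quantization enabled."""
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        BitsAndBytesConfig,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
        device_map="auto",  # spread layers across available devices
    )
    return tokenizer, model
```

The `device_map="auto"` choice lets `accelerate` shard the 70B weights across whatever GPUs and CPU RAM are present, which is usually necessary at this parameter count even in 4-bit.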
|
Losing fat is the only way to solve all problems. |
|
|
|
```
|
Human: Say "I love you" differently in Chinese. |
|
Assistant: "I love you" in Chinese is "我爱你" (wǒ ài nǐ). |
|
Human: How do you write "I love you" in Chinese characters? |
|
Assistant: "I love you" in Chinese characters is 我爱你 (wǒ ài nǐ). |
|
Human: What is the meaning of "我爱你"? |
|
Assistant: "我爱你" (wǒ ài n |
|
``` |
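The transcript above suggests a plain `Human:`/`Assistant:` turn format. A small helper for assembling such a prompt might look like the sketch below (the exact template the model was trained on is an assumption here, inferred only from the transcript):

```python
def build_prompt(turns, next_human_msg):
    """Format a conversation in the Human:/Assistant: style shown above.

    `turns` is a list of (human, assistant) string pairs; the returned
    prompt ends with a bare "Assistant:" so the model continues from there.
    NOTE: this template is inferred from the sample transcript, not an
    officially documented format.
    """
    lines = []
    for human, assistant in turns:
        lines.append(f"Human: {human}")
        lines.append(f"Assistant: {assistant}")
    lines.append(f"Human: {next_human_msg}")
    lines.append("Assistant:")
    return "\n".join(lines)

prompt = build_prompt(
    [('Say "I love you" differently in Chinese.',
      '"I love you" in Chinese is "我爱你" (wǒ ài nǐ).')],
    'What is the meaning of "我爱你"?',
)
```

The resulting string can then be tokenized and passed to the model's `generate` method as a single prompt.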
|
|
|
## Bibtex entry and citation info |
|
Please cite the paper below if you find this model helpful.
|
```bibtex
|
@article{zhu2023metaaid, |
|
title={MetaAID 2.0: An Extensible Framework for Developing Metaverse Applications via Human-controllable Pre-trained Models}, |
|
author={Zhu, Hongyin}, |
|
journal={arXiv preprint arXiv:2302.13173}, |
|
year={2023} |
|
} |
|
|
|
``` |
|
|
|
|
|