# Llama-v2-7B-Chat: Optimized for Mobile Deployment

## State-of-the-art large language model useful on a variety of language understanding and generation tasks

Llama 2 is a family of LLMs. The "Chat" suffix indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations), and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency.
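
These two latencies combine into an end-to-end response-time estimate. A minimal sketch with illustrative numbers (the values below are hypothetical, not measured figures for this model):

```python
def total_latency_s(ttft_s: float, per_token_s: float, output_tokens: int) -> float:
    """End-to-end response time: prompt processing (time to first token)
    plus token generation for each additional output token."""
    return ttft_s + per_token_s * max(output_tokens - 1, 0)

# Hypothetical numbers: 1.5 s time to first token, 60 ms per additional token.
# A 100-token reply then takes 1.5 + 0.06 * 99, roughly 7.44 s.
print(round(total_latency_s(1.5, 0.060, 100), 2))
```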

This model is an implementation of Llama-v2-7B-Chat found [here](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf).
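
The 4-bit weight quantization also sets a rough lower bound on the model's storage footprint. A back-of-the-envelope sketch (illustrative only; it ignores the w8a16 layers and quantization metadata):

```python
def weight_footprint_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate storage for the model weights alone, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# 7B parameters at 4-bit weights -> 3.5 GB, versus 14 GB at fp16.
print(weight_footprint_gb(7e9, 4))
print(weight_footprint_gb(7e9, 16))
```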

More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v2_7b_chat_quantized).

### Model Details