---
library_name: pytorch
license: llama3
pipeline_tag: text-generation
tags:
- llm
- generative_ai
- quantized
- android
---
![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/llama_v3_8b_chat_quantized/web-assets/model_demo.png)
# Llama-v3-8B-Chat: Optimized for Mobile Deployment
## State-of-the-art large language model useful on a variety of language understanding and generation tasks
Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations), with a few layers quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency.
This model is an implementation of Llama-v3-8B-Chat found [here](https://github.com/meta-llama/llama3/tree/main).
This repository provides scripts to run Llama-v3-8B-Chat on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).
### Model Details
- **Model Type:** Text generation
- **Model Stats:**
- Number of parameters: 8B
- Precision: w4a16 + w8a16 (few layers)
- Num of key-value heads: 8
- Model-1 (Prompt Processor): Llama-PromptProcessor-Quantized
- Max context length: 1024
- Prompt processor model size: 4.8GB
- Prompt processor input: 1024 tokens
- Prompt processor output: 1024 output tokens + KVCache for token generator
- Model-2 (Token Generator): Llama-TokenGenerator-KVCache-Quantized
- Token generator model size: 4.8GB
- Token generator input: 1 input token + past KVCache
- Token generator output: 1 output token + KVCache for next iteration
- Decoding length: 1024 (1 output token + 1023 from KVCache)
- Use: Initiate the conversation with the prompt processor, then run the token generator for subsequent iterations (see the sketch below).
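A hedged sketch of how the two parts cooperate at inference time; `prompt_processor` and `token_generator` are hypothetical callables standing in for the two compiled models:
```python
# Illustrative only: the real compiled models run via QNN/TFLite, not as
# Python functions. This shows the two-stage control flow described above.
def generate(prompt_tokens, prompt_processor, token_generator, max_new_tokens=30):
    # Stage 1: one pass of the prompt processor over the (padded) 1024-token
    # prompt yields the first output token plus the KV cache.
    token, kv_cache = prompt_processor(prompt_tokens)
    output = [token]

    # Stage 2: the token generator consumes one token per step, reading and
    # updating the KV cache, until the decoding budget is exhausted.
    for _ in range(max_new_tokens - 1):
        token, kv_cache = token_generator(token, kv_cache)
        output.append(token)
    return output
```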
## Deploying Llama 3 on-device
Large Language Models (LLMs) such as [Llama 3](https://llama.meta.com/llama3/) present the following complexities for on-device deployment:
1. Model size is too large to fit in device memory for inference
2. Multi-Head Attention (MHA) has large activations leading to fallback from accelerators
3. High model load and inference time
We can tackle the above constraints with the following steps:
1. Quantize weights to reduce on-disk model size, e.g., int8 or int4 weights
2. Quantize activations to reduce inference-time memory pressure
3. Graph transformations to reduce inference-time memory pressure, e.g., Multi-Head to Split-Head Attention (MHA -> SHA)
4. Graph transformations to convert or decompose operations into more accelerator-friendly operations, e.g., Linear to Conv (both rewrites are sketched after this list)
5. For LLMs with 7B or more parameters, the above steps are still not enough on mobile, so we go one step further and split the model into sub-parts.
Here, we divide the model into 4 parts in order to
1. Make the model exportable with low memory usage
2. Avoid inference-time out-of-memory errors
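A minimal PyTorch sketch of the two graph rewrites named in steps 3 and 4 (shapes and names are illustrative; the actual export pipeline applies these transformations at the graph level):
```python
import torch

def mha(q, k, v):
    # Multi-Head Attention over q, k, v of shape [batch, heads, seq, head_dim];
    # the full attention matrix is a large activation.
    attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v

def sha(q, k, v):
    # Split-Head Attention: same math, one head at a time, so each
    # intermediate activation is `heads`-times smaller and is more likely to
    # stay on the accelerator instead of falling back to CPU.
    heads = [mha(q[:, h:h + 1], k[:, h:h + 1], v[:, h:h + 1])
             for h in range(q.shape[1])]
    return torch.cat(heads, dim=1)

def linear_as_conv(linear: torch.nn.Linear) -> torch.nn.Conv2d:
    # A Linear layer is mathematically a 1x1 Conv2d, which mobile
    # accelerators typically execute more efficiently.
    conv = torch.nn.Conv2d(linear.in_features, linear.out_features,
                           kernel_size=1, bias=linear.bias is not None)
    with torch.no_grad():
        conv.weight.copy_(linear.weight.view(*linear.weight.shape, 1, 1))
        if linear.bias is not None:
            conv.bias.copy_(linear.bias)
    return conv
```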
In order to export Llama 3, please ensure
1. The host machine has >40GB of memory (RAM + swap space); a quick way to check is sketched below
2. If you don't have enough memory, export.py will print instructions for increasing swap space accordingly
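A minimal, Linux-only check of that memory budget (this helper is our own illustration; export.py performs its own check):
```python
import os

def total_memory_gb():
    # Physical RAM via sysconf, swap via /proc/meminfo (kB -> bytes).
    ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    swap = 0
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("SwapTotal:"):
                swap = int(line.split()[1]) * 1024
    return (ram + swap) / 1e9

if total_memory_gb() < 40:
    print("Under 40GB RAM+swap: expect export.py to ask for more swap space.")
```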
## Sample output prompts generated on-device
1. --prompt "where is California?"
```
------- Response Summary --------
Prompt: where is California?
Response: California is a state located on the West Coast of
```
2. --prompt "what is 2+3?" --max-output-tokens 30
```
-------- Response Summary --------
Prompt: what is 2+3?
Response: 2 + 3 = 5
```
3. --prompt "what is superposition in Quantum Physics?" --max-output-tokens 30
```
Prompt: what is superposition in Quantum Physics?
Response: Superposition is a fundamental concept in quantum mechanics, which is a branch of physics that studies the behavior of matter and energy at a very
```
| Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 99.315 | 33 - 35 | UINT16 | NPU | Llama3-TokenGenerator-KVCache-Quantized |
| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1807.176 | 11 - 13 | UINT16 | NPU | Llama3-PromptProcessor-Quantized |
## Installation
This model can be installed as a Python package via pip.
```bash
pip install "qai-hub-models[llama_v3_8b_chat_quantized]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
With this API token, you can configure your client to run models on the cloud
hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
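As a quick sanity check that the token is configured, you can list the available cloud-hosted devices from Python (a minimal sketch using the `qai_hub` client, which we assume was installed alongside the package above):
```python
import qai_hub as hub

# If the API token is configured correctly, this prints the names of the
# cloud-hosted devices your account can target.
for device in hub.get_devices():
    print(device.name)
```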
## Demo off target
The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.
```bash
python -m qai_hub_models.models.llama_v3_8b_chat_quantized.demo
```
The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.
**NOTE**: If you want to run the demo in a Jupyter Notebook or Google Colab-like
environment, please add the following to your cell (instead of the above).
```
%run -m qai_hub_models.models.llama_v3_8b_chat_quantized.demo
```
### Run model on a cloud-hosted device
In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Profiles the model's performance on a cloud-hosted device
* Downloads compiled assets that can be deployed on-device for Android
* Verifies accuracy between PyTorch and on-device outputs
```bash
python -m qai_hub_models.models.llama_v3_8b_chat_quantized.export
```
```
Profile Job summary of Llama3-TokenGenerator-KVCache-Quantized
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 79.17 ms
Estimated Peak Memory Range: 16.26-16.26 MB
Compute Units: NPU (20765) | Total (20765)
Profile Job summary of Llama3-PromptProcessor-Quantized
--------------------------------------------------
Device: Snapdragon X Elite CRD (11)
Estimated Inference Time: 1668.29 ms
Estimated Peak Memory Range: 10.30-10.30 MB
Compute Units: NPU (20248) | Total (20248)
```
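The export script drives the Qualcomm® AI Hub Python API under the hood. For reference, a hedged sketch of submitting a profile job directly (the model file name is illustrative; the actual compiled asset comes from the export step above):
```python
import qai_hub as hub

# Upload a compiled model part and profile it on a cloud-hosted device.
model = hub.upload_model("llama3_token_generator.so")
job = hub.submit_profile_job(
    model=model,
    device=hub.Device("Samsung Galaxy S23 Ultra"),
)
# Results also appear in the AI Hub web dashboard.
profile = job.download_profile()
```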
## Deploying compiled model to Android
The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
guide to deploy the .tflite model in an Android application. A quick local
sanity check of the exported file is sketched after this list.
- QNN (`.so` export): This [sample
app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
provides instructions on how to use the `.so` shared library in an Android application.
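Before wiring a `.tflite` export into an app, you can confirm it loads with the standard TensorFlow Lite Python interpreter (the file name below is an assumption):
```python
import tensorflow as tf

# Loading and allocating tensors verifies the flatbuffer is well-formed and
# prints the model's input/output signatures.
interpreter = tf.lite.Interpreter(model_path="llama3_prompt_processor.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())
```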
## View on Qualcomm® AI Hub
Get more details on Llama-v3-8B-Chat's performance across various devices [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
## License
- The license for the original implementation of Llama-v3-8B-Chat can be found
[here](https://github.com/meta-llama/llama3/blob/main/LICENSE).
- The license for the compiled assets for on-device deployment can be found [here](https://github.com/meta-llama/llama3/blob/main/LICENSE).
## References
* [Introducing Meta Llama 3: The most capable openly available LLM to date](https://ai.meta.com/blog/meta-llama-3/)
* [Source Model Implementation](https://github.com/meta-llama/llama3/tree/main)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:[email protected]).