korywat committed on
Commit
50ecce6
1 Parent(s): 395fa13

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +47 -142
README.md CHANGED
@@ -15,55 +15,40 @@ tags:
  # Llama-v3-8B-Chat: Optimized for Mobile Deployment

  ## State-of-the-art large language model useful on a variety of language understanding and generation tasks

- Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations), and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-KVCache-Quantized's latency.
-
- This model is an implementation of Llama-v3-8B-Chat found [here](https://github.com/meta-llama/llama3/tree/main).
- This repository provides scripts to run Llama-v3-8B-Chat on Qualcomm® devices.
- More details on model performance across various devices can be found
- [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).

  ### Model Details

  - **Model Type:** Text generation
  - **Model Stats:**
  - Number of parameters: 8B
  - Precision: w4a16 + w8a16 (few layers)
  - Number of key-value heads: 8
  - Model-1 (Prompt Processor): Llama-PromptProcessor-Quantized
- - Max context length: 1024
- - Prompt processor model size: 4.8GB
- - Prompt processor input: 1024 tokens
- - Prompt processor output: 1024 output tokens + KV cache for token generator
- - Model-2 (Token Generator): Llama-TokenGenerator-KVCache-Quantized
- - Token generator model size: 4.8GB
- - Token generator input: 1 input token + past KV cache
- - Token generator output: 1 output token + KV cache for next iteration
- - Decoding length: 1024 (1 output token + 1023 from KV cache)
  - Use: Initiate conversation with the prompt processor, then use the token generator for subsequent iterations.

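The two-model flow above reduces to a simple loop: one prompt-processor call produces the first token and the initial KV cache, then one token-generator call per additional token. A minimal sketch in plain Python, where `prompt_processor` and `token_generator` are hypothetical stand-ins for the two compiled model parts (not qai-hub-models APIs):

```python
# Minimal sketch of the prompt-processor / token-generator split.
# `prompt_processor` and `token_generator` are hypothetical callables
# standing in for the two compiled model parts.
def generate(prompt_tokens, prompt_processor, token_generator,
             max_new_tokens=64, eos_id=None):
    # Stage 1: one pass over the whole prompt yields the first token and
    # the initial KV cache; its latency is the time to first token.
    token, kv_cache = prompt_processor(prompt_tokens)
    output = [token]
    # Stage 2: each step feeds back one token plus the running KV cache;
    # its latency is the average time per additional token.
    for _ in range(max_new_tokens - 1):
        if token == eos_id:
            break
        token, kv_cache = token_generator(token, kv_cache)
        output.append(token)
    return output
```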
- ## Deploying Llama 3 on-device
-
- Large Language Models (LLMs) such as [Llama 3](https://llama.meta.com/llama3/) pose the following complexities for on-device deployment:
- 1. The model size is too large to fit in device memory for inference
- 2. Multi-Head Attention (MHA) has large activations, leading to fallback from accelerators
- 3. High model load and inference time
-
- We can tackle the above constraints with the following steps:
- 1. Quantize weights to reduce on-disk model size, e.g., int8 or int4 weights
- 2. Quantize activations to reduce inference-time memory pressure
- 3. Graph transformations to reduce inference-time memory pressure, e.g., Multi-Head to Split-Head Attention (MHA -> SHA); see the sketch after this list
- 4. Graph transformations to convert or decompose operations into more accelerator-friendly operations, e.g., Linear to Conv
- 5. For LLMs with 7B or more parameters, the above steps are still not enough on mobile, so we go one step further and split the model into sub-parts.
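To make step 3 concrete, here is a toy PyTorch sketch (an illustration, not the qai-hub-models transformation) showing that splitting fused multi-head attention into per-head attentions preserves the output while shrinking the largest intermediate activation by a factor of the head count:

```python
# Toy demonstration that MHA -> SHA is a numerics-preserving rewrite.
import torch

def mha(q, k, v, n_heads):
    # Fused MHA: one large [batch, heads, seq, seq] score activation.
    b, s, d = q.shape
    hd = d // n_heads
    qh = q.view(b, s, n_heads, hd).transpose(1, 2)
    kh = k.view(b, s, n_heads, hd).transpose(1, 2)
    vh = v.view(b, s, n_heads, hd).transpose(1, 2)
    scores = torch.softmax(qh @ kh.transpose(-2, -1) / hd ** 0.5, dim=-1)
    return (scores @ vh).transpose(1, 2).reshape(b, s, d)

def sha(q, k, v, n_heads):
    # Split-head attention: n_heads small, independent attentions, so the
    # peak per-op activation is n_heads times smaller.
    b, s, d = q.shape
    hd = d // n_heads
    outs = []
    for h in range(n_heads):
        sl = slice(h * hd, (h + 1) * hd)
        scores = torch.softmax(
            q[..., sl] @ k[..., sl].transpose(-2, -1) / hd ** 0.5, dim=-1)
        outs.append(scores @ v[..., sl])
    return torch.cat(outs, dim=-1)

q, k, v = (torch.randn(1, 16, 64) for _ in range(3))
assert torch.allclose(mha(q, k, v, 4), sha(q, k, v, 4), atol=1e-5)
```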
 
- Here, we divide the model into 4 parts in order to
- 1. Make the model exportable with low memory usage
- 2. Avoid inference-time out-of-memory errors
-
- In order to export Llama 3, please ensure
- 1. The host machine has >40GB of memory (RAM + swap space)
- 2. If you don't have enough memory, export.py will print instructions to increase swap space accordingly; a generic example follows
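As a generic illustration only (export.py prints the exact instructions for your machine), adding swap space on Linux typically looks like:

```bash
# Generic Linux example of adding 40GB of swap space; export.py prints
# instructions tailored to your machine.
sudo fallocate -l 40G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```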
 
  ## Sample output prompts generated on-device
  1. --prompt "where is California?"
@@ -88,119 +73,39 @@ Response: Superposition is a fundamental concept in quantum mechanics, which is

- | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
- |---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 99.315 ms | 33 - 35 MB | UINT16 | NPU | Llama3-TokenGenerator-KVCache-Quantized |
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1807.176 ms | 11 - 13 MB | UINT16 | NPU | Llama3-PromptProcessor-Quantized |
-
- ## Installation
-
- This model can be installed as a Python package via pip.
-
- ```bash
- pip install "qai-hub-models[llama_v3_8b_chat_quantized]"
- ```
-
- ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
-
- Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
- Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
-
- With this API token, you can configure your client to run models on cloud-hosted devices.
- ```bash
- qai-hub configure --api_token API_TOKEN
- ```
- Navigate to the [docs](https://app.aihub.qualcomm.com/docs/) for more information.
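Once configured, you can sanity-check the token from Python. A small sketch assuming the `qai_hub` client package is installed:

```python
# List the cloud-hosted devices your API token can target (assumes the
# qai-hub client is installed and configured as above).
import qai_hub as hub

for device in hub.get_devices():
    print(device.name)
```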
-
- ## Demo off target
-
- The package contains a simple end-to-end demo that downloads pre-trained
- weights and runs this model on a sample input.
-
- ```bash
- python -m qai_hub_models.models.llama_v3_8b_chat_quantized.demo
- ```
-
- The above demo runs a reference implementation of pre-processing, model
- inference, and post-processing.
-
- **NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
- environment, please add the following to your cell (instead of the above).
- ```
- %run -m qai_hub_models.models.llama_v3_8b_chat_quantized.demo
- ```
-
- ### Run model on a cloud-hosted device
-
- In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
- device. This script does the following:
- * Checks performance on-device on a cloud-hosted device.
- * Downloads compiled assets that can be deployed on-device for Android.
- * Checks accuracy between PyTorch and on-device outputs.
-
- ```bash
- python -m qai_hub_models.models.llama_v3_8b_chat_quantized.export
- ```
-
- ```
- Profile Job summary of Llama3-TokenGenerator-KVCache-Quantized
- --------------------------------------------------
- Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 79.17 ms
- Estimated Peak Memory Range: 16.26-16.26 MB
- Compute Units: NPU (20765) | Total (20765)
-
- Profile Job summary of Llama3-PromptProcessor-Quantized
- --------------------------------------------------
- Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 1668.29 ms
- Estimated Peak Memory Range: 10.30-10.30 MB
- Compute Units: NPU (20248) | Total (20248)
- ```
-
- ## Deploying compiled model to Android
-
- The models can be deployed using multiple runtimes:
- - TensorFlow Lite (`.tflite` export): [This
- tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
- guide to deploy the `.tflite` model in an Android application.
-
- - QNN (`.so` export): This [sample
- app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
- provides instructions on how to use the `.so` shared library in an Android application.
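Before wiring a `.tflite` export into an Android app, it can be smoke-tested on the host with the TFLite Python interpreter. A hedged sketch, where `model.tflite` is a placeholder for your exported asset path:

```python
# Host-side smoke test of a .tflite export using zero-filled inputs.
# "model.tflite" is a placeholder path, not an asset name from this repo.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    interpreter.set_tensor(detail["index"],
                           np.zeros(detail["shape"], dtype=detail["dtype"]))
interpreter.invoke()
for detail in interpreter.get_output_details():
    print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
```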
 
- ## View on Qualcomm® AI Hub
- Get more details on Llama-v3-8B-Chat's performance across various devices [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).
- Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

- ## License
- - The license for the original implementation of Llama-v3-8B-Chat can be found
- [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
- - The license for the compiled assets for on-device deployment can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).

  ## References
  * [LLaMA: Open and Efficient Foundation Language Models](https://ai.meta.com/blog/meta-llama-3/)
  * [Source Model Implementation](https://github.com/meta-llama/llama3/tree/main)

  ## Community
- * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
  * For questions or feedback please [reach out to us](mailto:[email protected]).
  # Llama-v3-8B-Chat: Optimized for Mobile Deployment
  ## State-of-the-art large language model useful on a variety of language understanding and generation tasks

+ Llama 3 is a family of LLMs. The "Chat" at the end indicates that the model is optimized for chatbot-like dialogue. The model is quantized to w4a16 (4-bit weights and 16-bit activations), and part of the model is quantized to w8a16 (8-bit weights and 16-bit activations), making it suitable for on-device deployment. For the prompt and output lengths specified below, the time to first token is Llama-PromptProcessor-Quantized's latency, and the average time per additional token is Llama-TokenGenerator-Quantized's latency.
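As a toy illustration of w4a16 (an example, not the actual quantization pipeline): weights are stored as 4-bit integers with a per-channel scale, while activations and compute stay in 16-bit:

```python
# Toy w4a16 illustration: 4-bit symmetric per-channel weights, 16-bit
# activations. Not the actual qai-hub-models quantization pipeline.
import numpy as np

def quantize_w4(w):
    # Signed 4-bit range is [-8, 7]; one scale per output channel.
    scale = np.maximum(np.abs(w).max(axis=1, keepdims=True), 1e-8) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

w = np.random.randn(16, 32).astype(np.float32)
q, scale = quantize_w4(w)
w_hat = (q * scale).astype(np.float16)         # dequantize for 16-bit math
x = np.random.randn(1, 16).astype(np.float16)  # a16: 16-bit activation
y = x @ w_hat                                  # matmul runs in 16-bit
print("max weight error:", np.abs(w - w_hat.astype(np.float32)).max())
```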
+ This is based on the implementation of Llama-v3-8B-Chat found
+ [here](https://github.com/meta-llama/llama3/tree/main). More details on model performance
+ across various devices can be found [here](https://aihub.qualcomm.com/models/llama_v3_8b_chat_quantized).

  ### Model Details

  - **Model Type:** Text generation
  - **Model Stats:**
+ - Context length: 4096
  - Number of parameters: 8B
+ - Model size: 4.8GB
  - Precision: w4a16 + w8a16 (few layers)
  - Number of key-value heads: 8
  - Model-1 (Prompt Processor): Llama-PromptProcessor-Quantized
+ - Prompt processor input: 128 tokens + position embeddings + attention mask + KV cache inputs
+ - Prompt processor output: 128 output tokens + KV cache outputs
+ - Model-2 (Token Generator): Llama-TokenGenerator-Quantized
+ - Token generator input: 1 input token + position embeddings + attention mask + KV cache inputs
+ - Token generator output: 1 output token + KV cache outputs
  - Use: Initiate conversation with the prompt processor, then use the token generator for subsequent iterations.

+ | Model | Device | Chipset | Target Runtime | Response Rate (Tokens/Second) | Time To First Token (TTFT) Range (Seconds) | Evaluation |
+ |---|---|---|---|---|---|---|
+ | Llama-v3-8B-Chat | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 66.14 | (0.028, 0.92) | -- |
+ | Llama-v3-8B-Chat | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 66.14 | (0.028, 0.92) | -- |
+ | Llama-v3-8B-Chat | Samsung Galaxy S23 Ultra | Snapdragon® 8 Gen 2 | QNN | 66.14 | (0.028, 0.92) | -- |
+ | Llama-v3-8B-Chat | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 66.14 | (0.028, 0.92) | -- |
+ | Llama-v3-8B-Chat | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 66.14 | (0.028, 0.92) | -- |

 
+ ## Deploying Llama 3 on-device
+ Please follow [this tutorial](https://github.com/quic/ai-hub-apps/tree/main/tutorials/llama)
+ to compile QNN binaries and generate bundle assets to run [ChatApp on Windows](https://github.com/quic/ai-hub-apps/tree/main/apps/windows/cpp/ChatApp) and on Android, powered by QNN-Genie.

  ## Sample output prompts generated on-device
  1. --prompt "where is California?"

+ ## License
+ * The license for the original implementation of Llama-v3-8B-Chat can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
+ * The license for the compiled assets for on-device deployment can be found [here](https://github.com/facebookresearch/llama/blob/main/LICENSE).
  ## References
  * [LLaMA: Open and Efficient Foundation Language Models](https://ai.meta.com/blog/meta-llama-3/)
  * [Source Model Implementation](https://github.com/meta-llama/llama3/tree/main)

  ## Community
+ * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions, and learn more about on-device AI.
  * For questions or feedback please [reach out to us](mailto:[email protected]).

+ ## Usage and Limitations
+
+ This model may not be used for or in connection with any of the following applications:
+
+ - Accessing essential private and public services and benefits;
+ - Administration of justice and democratic processes;
+ - Assessing or recognizing the emotional state of a person;
+ - Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
+ - Education and vocational training;
+ - Employment and workers management;
+ - Exploitation of the vulnerabilities of persons resulting in harmful behavior;
+ - General purpose social scoring;
+ - Law enforcement;
+ - Management and operation of critical infrastructure;
+ - Migration, asylum and border control management;
+ - Predictive policing;
+ - Real-time remote biometric identification in public spaces;
+ - Recommender systems of social media platforms;
+ - Scraping of facial images (from the internet or otherwise); and/or
+ - Subliminal manipulation.