---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

# Meta Llama 3.1 8B Instruct ONNX INT4

## Model Developer: Meta

## Model Description

The Llama 3.1 8B Instruct ONNX INT4 model is the AWQ-quantized version of the Meta Llama-3.1-8B-Instruct model, an auto-regressive language model that uses an optimized transformer architecture for multilingual dialogue use cases. For more information, see the [Meta-Llama-3.1-8B-Instruct model card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). The Llama 3.1 8B Instruct ONNX INT4 model was quantized with [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).

This model is ready for commercial and research use cases.

Steps followed to generate this quantized model:

1. Download the Meta Llama-3.1-8B-Instruct model in PyTorch bfloat16 format from Hugging Face.
2. Convert the PyTorch model to ONNX FP16 using the onnxruntime-genai model builder.
3. Quantize the Llama-3.1-8B-Instruct ONNX FP16 model to an ONNX INT4 AWQ model using TensorRT Model Optimizer – Windows.

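The quantization in step 3 applies AWQ: activation-aware scaling followed by group-wise INT4 weight quantization. The group-wise symmetric quantize/dequantize arithmetic at its core can be sketched as below (a simplified NumPy illustration with our own function names and a 128-element group size, not the TensorRT Model Optimizer implementation):

```python
import numpy as np

def quantize_int4_groupwise(w, group_size=128):
    """Symmetric INT4 quantization of a weight matrix, one scale per
    group of `group_size` consecutive values along each row."""
    rows, cols = w.shape
    assert cols % group_size == 0
    groups = w.reshape(rows, cols // group_size, group_size)
    # Symmetric scaling: map the largest magnitude in each group to 7 (INT4 max).
    scales = np.abs(groups).max(axis=-1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero for all-zero groups
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Reconstruct an approximate FP32 weight matrix from INT4 values and scales."""
    return (q.astype(np.float32) * scales).reshape(q.shape[0], -1)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 256)).astype(np.float32)
q, s = quantize_int4_groupwise(w, group_size=128)
w_hat = dequantize(q, s)
max_err = np.abs(w - w_hat).max()  # bounded by half the largest group scale
```

Model Optimizer's AWQ additionally derives per-channel scales from activation statistics over the calibration set to protect salient weights; the sketch shows only the rounding arithmetic.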
## Third-Party Community Consideration

This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the Non-NVIDIA [Meta-Llama-3.1-8B-Instruct Model Card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).

## License/Terms of Use:

GOVERNING TERMS: Use of this model is governed by the NVIDIA Open Model License Agreement (found at https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf). ADDITIONAL INFORMATION: Llama 3.1 Community License Agreement (found at https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE). Built with Llama.

## Reference:

* Meta Llama 3.1 [Model Card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on Hugging Face
* [The Llama 3 Herd of Models](https://ai.meta.com/research/publications/the-llama-3-herd-of-models/)
* Meta Llama 3.1 [model card and prompt formats documentation](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/)

## Model Architecture:

Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

**Architecture Type:** Transformer <br>
**Network Architecture:** Llama 3.1 <br>

**Input**

* Input Type: Text
* Input Format: String
* Input Parameters: Sequence (1D)
* Other Properties Related to Input: Supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai

**Output**

* Output Type: Text
* Output Format: String
* Output Parameters: Sequence (1D)

## Software Integration:

* **Supported Hardware Microarchitecture Compatibility:** NVIDIA Ampere and newer GPUs. GPUs with 6 GB or more of VRAM are recommended; larger context lengths may require more VRAM.
* **Supported Operating System(s):** Windows

## Model Version(s): 1.0

## Training, Testing, and Evaluation Datasets:

Refer to the [Meta-Llama-3.1-8B-Instruct Model Card](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) for details.

### Calibration Dataset: cnn_dailymail

Link: https://huggingface.co/datasets/abisee/cnn_dailymail

* Data Collection Method by dataset: Automated
* Labeling Method by dataset: Unknown

### Evaluation Dataset: MMLU

Link: https://people.eecs.berkeley.edu/~hendrycks/data.tar

* Data Collection Method by dataset: Unknown
* Labeling Method by dataset: Not Applicable

## Evaluation Results:

**MMLU (5-shot):**

With the GenAI ORT->DML backend, we measured the following accuracy on a desktop RTX 4090 GPU system.

"overall_accuracy": 66.1

**Test configuration:**

* **GPU:** RTX 4090
* **Windows 11:** 23H2
* **NVIDIA Graphics driver:** R565 or higher

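The overall MMLU score aggregates per-subject results into a single number. As an illustrative sketch of that aggregation (a question-weighted average with hypothetical subject counts of our own, not the data behind the 66.1 score):

```python
def overall_accuracy(subject_results):
    """Question-weighted (micro-averaged) accuracy, in percent, from a
    mapping of subject -> (num_correct, num_questions)."""
    correct = sum(c for c, _ in subject_results.values())
    total = sum(t for _, t in subject_results.values())
    return 100.0 * correct / total

# Hypothetical per-subject counts, for illustration only.
results = {
    "abstract_algebra": (55, 100),
    "anatomy": (90, 135),
    "astronomy": (110, 152),
}
print(round(overall_accuracy(results), 1))  # 65.9 for these made-up counts
```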
## Inference:

**Inference Backend:** [Onnxruntime-GenAI-DirectML](https://onnxruntime.ai/docs/genai/howto/install.html#directml)

We used the GenAI ORT->DML backend for inference. Instructions for using this backend are given in the readme.txt file available under the Files section.

## Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets the requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).