Modify the README
README.md
CHANGED
@@ -6,24 +6,15 @@ tags:
 
 # Description
 
-This is a test model for
+This is a test model for [our test scripts](https://github.com/kazssym/stablelm-study-2).
 It was exported from [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t) to ONNX with a [modified version](https://github.com/huggingface/optimum/pull/1719) of Hugging Face Optimum.
 It is quite possible to have problems.
 
 This model does not include a tokenizer.
 
-#
+# Exporting
 
-This model was exported with
-```
-optimum-cli export onnx --model stabilityai/stablelm-3b-4e1t --trust-remote-code --device cpu --optimize O1 output/onnx-fp32/
-```
+This model was exported with [our test scripts](https://github.com/kazssym/stablelm-study-2).
 
-
+The source model requires [Transformers](https://github.com/huggingface/transformers) 4.38 or later to export.
 
-# Output from Optimum CLI
-
-```
-The ONNX export succeeded with the warning: The maximum absolute difference between the output of the reference model and the ONNX exported model is not within the set tolerance 1e-05:
-- logits: max diff = 2.6553869247436523e-05.
-```
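The updated card documents how the model was exported but not how a consumer would load the result; since it states that the export ships no tokenizer, the tokenizer has to come from the source repository. Below is a minimal, untested sketch of that pattern, assuming the export loads through `optimum.onnxruntime` like other decoder-only ONNX exports; the repository id is a placeholder and is not documented by this commit.

```python
# Hypothetical usage sketch; not part of the commit. Assumes optimum[onnxruntime]
# and transformers >= 4.38 are installed.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

# Placeholder: substitute this repository's id or a local path to the export.
onnx_repo = "path/or/repo-id-of-this-onnx-export"
model = ORTModelForCausalLM.from_pretrained(onnx_repo)

# The card states this export includes no tokenizer, so reuse the source model's.
tokenizer = AutoTokenizer.from_pretrained(
    "stabilityai/stablelm-3b-4e1t", trust_remote_code=True
)

inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `trust_remote_code=True` flag mirrors the `--trust-remote-code` option in the removed `optimum-cli` invocation; whether the tokenizer itself requires it is an assumption.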