Daniel Ferreira committed
Commit 25503ec · 1 Parent(s): d60567a

add note about evaluate.py

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -35,8 +35,10 @@ print(pipe(['example text','exemple de texte','texto de ejemplo','testo di esemp
 The table below compares some statistics on running the original model, vs the original model with the [onnxruntime](https://onnxruntime.ai/), vs optimizing the model with onnxruntime.
 
 
-| model          | Accuracy | Samples p/ second (CPU) | Samples p/ second (GPU) | GPU VRAM | Disk Space |
+| model          | Accuracy (%) | Samples p/ second (CPU) | Samples p/ second (GPU) | GPU VRAM | Disk Space |
 |----------------|----------|-------------------------|-------------------------|----------|------------|
 | original       | 92.1083  | 16                      | 250                     | 3GB      | 1.1GB      |
 | ort            | 92.1067  | 19                      | 340                     | 4GB      | 1.1GB      |
 | optimized (O4) | 92.1031  | 14                      | 650                     | 2GB      | 540MB      |
+
+For details on how these numbers were reached, check out `evaluate.py` in this repo.
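
The commit defers the measurement details to `evaluate.py`, which is not shown here. As a rough illustration only, the sketch below shows one way the two ONNX variants compared in the table (plain ONNX Runtime export and the O4-optimized build) could be produced with Hugging Face Optimum; the task class, model id, and output directories are assumptions, not taken from this repo or its `evaluate.py`.

```python
# Hypothetical sketch, not the repo's evaluate.py: export a Transformers checkpoint
# to ONNX and apply ONNX Runtime's O4 optimization level (full graph optimizations
# plus fp16, intended for GPU execution) using Hugging Face Optimum.
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
from optimum.onnxruntime.configuration import AutoOptimizationConfig

model_id = "path-or-hub-id-of-this-model"  # placeholder, not the actual repo id

# Plain ONNX Runtime export (the "ort" row in the table).
ort_model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
ort_model.save_pretrained("onnx")

# O4-optimized variant (the "optimized (O4)" row in the table).
optimizer = ORTOptimizer.from_pretrained(ort_model)
optimizer.optimize(
    save_dir="onnx-o4",
    optimization_config=AutoOptimizationConfig.O4(),
)
```

If the repo did use O4, the fp16 weights it produces would be consistent with the roughly halved disk footprint and the GPU-only throughput gains reported in the table.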