mgoin committed c9b8405 (verified) · Parent(s): 15cda38

Update README.md

Files changed (1): README.md +1 −1
README.md CHANGED
@@ -15,6 +15,6 @@ Download our compression-aware inference engines and open source tools for fast
  * [llm-compressor](https://github.com/vllm-project/llm-compressor/): HF-native library for applying quantization and sparsity algorithms to LLMs for optimized deployment with vLLM
  * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering accelerated performance on CPUs and APIs to integrate ML into your application

- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/2IDqpxbtCtw_ilOZbTSj0.png)
+ ![NM Workflow](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/oFtTSqKjDLwd095gtYHlc.png)

  In this profile we provide accurate model checkpoints compressed with SOTA methods and ready to run in vLLM, such as W4A16, W8A16, W8A8 (int8 and fp8), and many more! If you would like help quantizing a model or have a request for us to add a checkpoint, please open an issue at https://github.com/vllm-project/llm-compressor.