mgoin committed
Commit f9ac629 · verified · 1 Parent(s): 31c5a14

Update README.md

Files changed (1): README.md +9 -6
README.md CHANGED
@@ -9,9 +9,12 @@ pinned: false
 
 # Software-Delivered AI Inference
 
- Neural Magic helps developers accelerate deep learning performance using automated model sparsification technologies and inference engines.
- Download our sparsity-aware inference engines and open source tools for fast model inference.
- * [nm-vllm](https://github.com/neuralmagic/nm-vllm): A high-throughput and memory-efficient inference engine for LLMs, incorporating the latest LLM optimizations like quantization and sparsity
- * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application
- * [SparseML](https://github.com/neuralmagic/sparseml): Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
- * [SparseZoo](https://sparsezoo.neuralmagic.com/): Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
+ Neural Magic helps developers accelerate deep learning performance using automated model compression technologies and inference engines.
+ Download our compression-aware inference engines and open source tools for fast model inference.
+ * [nm-vllm](https://neuralmagic.com/nm-vllm/): A high-throughput and memory-efficient inference engine for LLMs, our supported enterprise distribution of [vLLM](https://github.com/vllm-project/vllm)
+ * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering accelerated performance on CPUs and APIs to integrate ML into your application
+ * [LLM Compressor](https://github.com/vllm-project/llm-compressor/): Library for applying quantization and sparsity recipes to neural networks with a few lines of code, enabling faster and smaller models
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/2IDqpxbtCtw_ilOZbTSj0.png)
+
+ In this profile we provide accurate model checkpoints compressed with SOTA methods such as W4A16, W8A16, W8A8 (int8 and fp8), and many more, ready to run in vLLM!
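For context on the LLM Compressor bullet above, applying a compression recipe really does take only a few lines. Below is a minimal sketch modeled on the llm-compressor README of this period; the model name, calibration dataset, and output directory are illustrative placeholders, and exact module paths may differ between llm-compressor versions.

```python
# Minimal sketch: one-shot W8A8 (int8) quantization with LLM Compressor.
# Model, dataset, and output_dir are illustrative placeholders; module
# paths may vary across llm-compressor versions.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

recipe = [
    # Rebalance activation outliers into the weights before quantizing
    SmoothQuantModifier(smoothing_strength=0.8),
    # Quantize all Linear layers to int8 weights and activations (W8A8)
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",  # calibration data for the one-shot pass
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-W8A8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The resulting checkpoint is saved in a vLLM-compatible format, which is what makes the "ready to run in vLLM" claim in the README hold.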
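Running one of the profile's compressed checkpoints is then standard vLLM usage. A minimal sketch using vLLM's offline API; the model id below is one example from the profile and can be swapped for any other W4A16, W8A16, W8A8, or FP8 checkpoint:

```python
# Minimal sketch: generating text from a compressed checkpoint with vLLM.
# The model id is an example; substitute any compressed checkpoint from
# this profile. vLLM detects the quantization scheme from the checkpoint.
from vllm import LLM, SamplingParams

llm = LLM(model="neuralmagic/Meta-Llama-3-8B-Instruct-FP8")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["What does W8A8 quantization mean?"], params)
print(outputs[0].outputs[0].text)
```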