mgoin committed
Commit 35668a4 · verified · 1 Parent(s): 31afc66

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ pinned: false
 
 Neural Magic helps developers in accelerating deep learning performance using automated model sparsification technologies and inference engines.
 Download our sparsity-aware inference engines and open source tools for fast model inference.
-* [NM-vLLM](https://github.com/neuralmagic/nm-vllm): A high-throughput and memory-efficient inference engine for LLMs, incorporating the latest LLM optimizations like quantization and sparsity
+* [nm-vllm](https://github.com/neuralmagic/nm-vllm): A high-throughput and memory-efficient inference engine for LLMs, incorporating the latest LLM optimizations like quantization and sparsity
 * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application
 * [SparseML](https://github.com/neuralmagic/sparseml): Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
 * [SparseZoo](https://sparsezoo.neuralmagic.com/): Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
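
For context on the renamed bullet, here is a minimal usage sketch of nm-vllm, assuming it keeps the upstream vLLM Python API; the install command, import path, model ID, and prompt below are illustrative assumptions and are not part of this commit.

```python
# Minimal sketch (assumption): nm-vllm is installed via `pip install nm-vllm`
# and exposes the standard vLLM Python API, so the import path and class
# names below mirror upstream vLLM.
from vllm import LLM, SamplingParams

# The model ID is a placeholder; any compatible Hugging Face checkpoint should work.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Generate a completion for a single prompt and print the decoded text.
outputs = llm.generate(["Sparsity-aware inference lets you"], params)
for out in outputs:
    print(out.outputs[0].text)
```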