mgoin committed (verified) · Commit a2c8f17 · Parent: 243c5b5

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -7,13 +7,13 @@ sdk: static
  pinned: false
  ---
 
- # Software-Delivered AI Inference
+ # The Future of AI is Open
 
  Neural Magic helps developers in accelerating deep learning performance using automated model compression technologies and inference engines.
  Download our compression-aware inference engines and open source tools for fast model inference.
  * [nm-vllm](https://neuralmagic.com/nm-vllm/): A high-throughput and memory-efficient inference engine for LLMs, our supported enterprise distribution of [vLLM](https://github.com/vllm-project/vllm).
- * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering accelerated performance on CPUs and APIs to integrate ML into your application
  * [llm-compressor](https://github.com/vllm-project/llm-compressor/): HF-compatible library for applying various quantization and sparsity algorithms to llms for optimized deployment with vLLM
+ * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering accelerated performance on CPUs and APIs to integrate ML into your application
 
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/2IDqpxbtCtw_ilOZbTSj0.png)
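
For context on the tools referenced in the README above, here is a minimal sketch of offline batch generation with the upstream vLLM library that nm-vllm distributes; the model name, prompt, and sampling settings are illustrative placeholders and are not part of this commit.

```python
# Minimal sketch of offline inference with upstream vLLM.
# Assumptions: model name, prompt, and sampling values are placeholders.
from vllm import LLM, SamplingParams

# Load a model into the vLLM engine.
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for a batch of prompts and print the text.
outputs = llm.generate(["What does a compression-aware inference engine do?"], params)
for out in outputs:
    print(out.outputs[0].text)
```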