mgoin committed · Commit 092b9b4 (verified) · Parent(s): 7101434

Update README.md

Files changed (1): README.md +3 -2
README.md CHANGED
@@ -9,11 +9,12 @@ pinned: false
 
 # The Future of AI is Open
 
+**If you are looking for compressed models to run with vLLM, they have been moved to the [RedHatAI](https://huggingface.co/RedHatAI) organization. We look forward to continuing to publish optimized models for open source use!**
+
 [Neural Magic](https://neuralmagic.com/) helps developers accelerate deep learning performance using automated model compression technologies and inference engines.
 Download our compression-aware inference engines and open source tools for fast model inference.
-* [nm-vllm](https://neuralmagic.com/nm-vllm/): Enterprise-ready inference system based on the open-source library vLLM, for at-scale operationalization of performant open-source LLMs
+* [vLLM](https://github.com/vllm-project/vllm/): A high-throughput and memory-efficient inference engine for at-scale deployment of performant open-source LLMs
+ * [LLM Compressor](https://github.com/vllm-project/llm-compressor/): HF-native library for applying quantization and sparsity algorithms to LLMs for optimized deployment with vLLM
-* [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering accelerated performance on CPUs and APIs to integrate ML into your application
 
 ![NM Workflow](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/QacT1zAnoidTKqRTY4NxH.png)
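
The two tools the updated README links together form a compress-then-serve workflow: LLM Compressor writes out a quantized checkpoint that vLLM can load directly. Below is a minimal sketch of that workflow, not an official example from this commit; the model and dataset names are illustrative placeholders, and the exact `llmcompressor` import paths and `oneshot` arguments may differ across library versions.

```python
# Sketch only: model/dataset names are illustrative placeholders, and
# llmcompressor's oneshot API and modifier arguments vary by version.
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# 1. Quantize a Hugging Face causal LM to 4-bit weights (W4A16) with GPTQ,
#    skipping the lm_head so output quality is preserved.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",   # placeholder HF model
    dataset="open_platypus",                      # calibration dataset
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)

# 2. Serve the compressed checkpoint with vLLM's offline inference API.
from vllm import LLM, SamplingParams

llm = LLM(model="TinyLlama-1.1B-Chat-v1.0-W4A16")
outputs = llm.generate(
    ["The future of AI is"],
    SamplingParams(temperature=0.8, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

The same serving step applies to the pre-compressed checkpoints now hosted under the RedHatAI organization: pass the repository id to `LLM(model=...)` and vLLM handles the quantized weights itself.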