robertgshaw2 committed (verified)
Commit 31c5a14 · 1 parent: 35668a4

Update README.md

Files changed (1):
  1. README.md +0 -3
README.md CHANGED
@@ -15,6 +15,3 @@ Download our sparsity-aware inference engines and open source tools for fast mod
 * [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application
 * [SparseML](https://github.com/neuralmagic/sparseml): Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
 * [SparseZoo](https://sparsezoo.neuralmagic.com/): Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
-
-
-**✨NEW✨ DeepSparse LLMs**: We are excited to announce our paper on Sparse Fine-Tuning of LLMs, starting with MPT and Llama 2. Check out the [paper](https://arxiv.org/abs/2310.06927), [models](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and [usage](https://research.neuralmagic.com/mpt-sparse-finetuning).