Update README.md
Download our sparsity-aware inference engines and open source tools for fast model inference:

* [DeepSparse](https://github.com/neuralmagic/deepsparse): Inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application
* [SparseML](https://github.com/neuralmagic/sparseml): Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
* [SparseZoo](https://sparsezoo.neuralmagic.com/): Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes
**✨NEW✨ DeepSparse LLMs**: We are excited to announce our paper on Sparse Fine-Tuning of LLMs, starting with MPT and Llama 2. Check out the [paper](https://arxiv.org/abs/2310.06927), [models](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and [usage](https://research.neuralmagic.com/mpt-sparse-finetuning).