mgoin committed on
Commit
3e23907
·
1 Parent(s): dffbee6

Update app.py

Files changed (1):
  1. app.py +1 -1
app.py CHANGED
@@ -14,7 +14,7 @@ Model ID: {MODEL_ID}
 🚀 **Experience the power of LLM mathematical reasoning** through [our Llama 2 sparse finetuned](https://arxiv.org/abs/2310.06927) on the [GSM8K dataset](https://huggingface.co/datasets/gsm8k).
 GSM8K, short for Grade School Math 8K, is a collection of 8.5K high-quality linguistically diverse grade school math word problems, designed to challenge question-answering systems with multi-step reasoning.
 Observe the model's performance in deciphering complex math questions and offering detailed step-by-step solutions.
-## Accelerated Inferenced on CPUs
+## Accelerated Inference on CPUs
 The Llama 2 model runs purely on CPU courtesy of [sparse software execution by DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
 DeepSparse provides accelerated inference by taking advantage of the model's weight sparsity to deliver tokens fast!
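The changed file credits DeepSparse's sparse software execution for the CPU speedup. As a rough illustration of the underlying idea (not DeepSparse's actual kernels, which use compiled, cache-aware code), storing only the nonzero weights of a pruned layer lets a matrix-vector product skip the zeroed entries entirely, so compute scales with the number of remaining weights rather than the dense size:

```python
# Minimal sketch of sparsity-accelerated inference: keep only nonzero weights
# and do work proportional to them. Hypothetical helper names for illustration.

def to_sparse(rows):
    """Convert a dense weight matrix to per-row lists of (column, value) pairs."""
    return [[(j, w) for j, w in enumerate(row) if w != 0.0] for row in rows]

def sparse_matvec(sparse_rows, x):
    """Multiply sparse weights by a dense activation vector, skipping zeros."""
    return [sum(w * x[j] for j, w in row) for row in sparse_rows]

dense = [
    [0.0, 2.0, 0.0, 0.0],  # 75% of this row's weights pruned to zero
    [1.0, 0.0, 0.0, 3.0],  # 50% pruned
]
x = [1.0, 2.0, 3.0, 4.0]

sparse = to_sparse(dense)
print(sparse_matvec(sparse, x))  # [4.0, 13.0] -- same result as the dense product

nonzeros = sum(len(row) for row in sparse)
print(f"multiplications: {nonzeros} sparse vs {len(dense) * len(x)} dense")
```

The same principle, applied to a Llama 2 model with most weights pruned, is what lets a CPU keep up with token generation.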