Update README.md
README.md CHANGED
@@ -19,7 +19,7 @@ license_link: https://ai.google.dev/gemma/terms
 
 # Gemma Model Card
 This model card is copied from the original [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) with edits to the code snippets on how to run this auto-gptq quantized version of the model.
-This auto-gptq quantized version of the model had only been tested to work on cuda GPU
+This auto-gptq quantized version of the model has only been tested to work on a CUDA GPU and utilises approximately 2.6 GB of VRAM.
 
 **Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
 
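
For context, the snippet below is a minimal sketch (not part of the commit) of how a GPTQ-quantized Gemma checkpoint like this one is typically loaded on a single CUDA GPU with `transformers`. The repo id is a placeholder, and `auto-gptq`/`optimum` are assumed to be installed alongside `transformers` and `torch`.

```python
# Minimal sketch, assuming the quantized weights live in a Hub repo and that
# auto-gptq + optimum are installed so transformers can load GPTQ checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "user/gemma-2b-it-gptq"  # placeholder repo id (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="cuda:0" keeps the whole model on one CUDA GPU, in line with the
# ~2.6 GB VRAM figure mentioned in the README.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")

chat = [{"role": "user", "content": "Write a hello world program in Python."}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```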