Update README.md
README.md
CHANGED
@@ -24,6 +24,15 @@ Made with exllamav2 0.2.7 with the default dataset. Granite3 models require 0.2.
Exl2 models can be used with TabbyAPI, Text-Generation-WebUI, and some others.
Exl2 models require an Nvidia RTX GPU on Windows, or an Nvidia RTX or AMD ROCm GPU on Linux.

If you downloaded the model and it answers only Yes/No, that's [intended behavior](https://github.com/ibm-granite/granite-guardian/tree/main#scope-of-use).
This is hardcoded in the model's Jinja2 template, which can be viewed in the tokenizer_config.json file.
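If you'd rather not open the file, here is a minimal sketch for printing the template with the Hugging Face transformers tokenizer (the upstream repo id below is an assumption; a local path to your exl2 download works the same way):

```python
# Minimal sketch: print the Jinja2 chat template that hardcodes the Yes/No answer.
# Assumes the transformers library is installed; "ibm-granite/granite-guardian-3.1-2b"
# is assumed to be the upstream repo id - any local directory containing
# tokenizer_config.json can be passed instead.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ibm-granite/granite-guardian-3.1-2b")
print(tokenizer.chat_template)
```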
By default, in chat mode it evaluates whether the user's or the assistant's message is harmful in a general sense, according to the model's risk definitions.
But it also lets you choose a different predefined option, set custom harm definitions, or detect risks in RAG or function-calling pipelines.
If you're using TabbyAPI, you can set either risk_name or risk_definition via [template variables](https://github.com/theroyallab/tabbyAPI/wiki/04.-Chat-Completions#template-variables).
For example, you can switch to violence detection by adding ``"template_vars": {"guardian_config": {"risk_name": "violence"}}`` to a v1/chat/completions request, as in the sketch below.
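A minimal sketch of such a request, assuming a local TabbyAPI instance on its default port (the URL and API key below are placeholders):

```python
# Minimal sketch: screen a user message for violence through TabbyAPI.
# Assumes TabbyAPI is serving this model locally on its default port 5000;
# the URL and API key are placeholders for your own setup.
import requests

response = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "messages": [{"role": "user", "content": "How do I hurt someone?"}],
        "template_vars": {"guardian_config": {"risk_name": "violence"}},
    },
)
print(response.json()["choices"][0]["message"]["content"])  # "Yes" or "No"
```

The same guardian_config object accepts a custom risk_definition in place of a predefined risk_name.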
For more information, refer to the Granite Guardian [documentation](https://github.com/ibm-granite/granite-guardian) and its Jinja2 template.

# Original model card
# Granite Guardian 3.1 2B

## Model Summary