Update README.md
**slim-sentiment** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.

slim-sentiment has been fine-tuned for **sentiment analysis** function calls, generating output consisting of a JSON dictionary corresponding to specified keys.
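
For example, specifying the key `sentiment` yields a dictionary-style output along these lines (illustrative; the value depends on the input passage):

```python
{"sentiment": ["negative"]}
```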

Each slim model has a corresponding 'tool' in a separate repository, e.g., [**'slim-sentiment-tool'**](www.huggingface.co/llmware/slim-sentiment-tool/), which is a 4-bit quantized GGUF version of the model intended for inference.

The fastest way to get started with slim-sentiment is through direct import in transformers:

```python
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer
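
# --- assumed reconstruction of the elided middle of this example; ---
# --- model loading is standard, but the prompt template and values are illustrative ---

model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")

# build a function-call prompt (assumed SLIM-style function/params tags)
text = "The stock market dropped sharply on fears of a slowing economy."
prompt = "<human>: " + text + "\n" + "<classify> sentiment </classify>" + "\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(inputs.input_ids,
                         pad_token_id=tokenizer.eos_token_id,
                         max_new_tokens=100)

# decode only the tokens generated after the prompt
output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

try:
    # the model emits a dictionary-style string, e.g. {"sentiment": ["negative"]}
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")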
except:
    print("could not convert to json automatically - ", output_only)
```

## Using as Function Call in LLMWare
We envision the slim models deployed in a pipeline/workflow/templating framework that handles the prompt packaging more elegantly.

Check out llmware for one such implementation:

```python
from llmware.models import ModelCatalog
slim_model = ModelCatalog().load_model("llmware/slim-sentiment")
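```

As a minimal sketch of a typical next step, assuming llmware's `function_call` interface (the input text is illustrative):

```python
# assumed usage: run a sentiment function call on a sample passage
response = slim_model.function_call("This was a strong quarter with better than expected results.")
print("llm response: ", response)
```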