doberst committed (verified)
Commit 44fa5e7
1 Parent(s): 5ae662e

Update README.md

Files changed (1)
  1. README.md +7 -4
README.md CHANGED
@@ -14,23 +14,26 @@ slim-sentiment has been fine-tuned for **sentiment analysis** function calls, ge
  `{"sentiment": ["positive"]}`


- SLIM models aspire re-imagine traditional 'hard-coded' classifiers through the use of function calls, to provide a flexible natural language generative model that can be used as decision gates and processing steps in a complex LLM-based automation workflow.
+ SLIM models are designed to provide a flexible natural language generative model that can be used for decision gates and processing steps in a complex LLM-based automation workflow.

  Each slim model has a 'quantized tool' version, e.g., [**'slim-sentiment-tool'**](https://huggingface.co/llmware/slim-sentiment-tool).


  ## Prompt format:

+ `function = "classify"`
+ `params = "sentiment"`
+
  `"<human> " + {text} + "\n" + `

- `"<{function}> " + {keys} + "</{function}>"`
+ `"<{function}> " + {params} + "</{function}>"`

  `+ "/n<bot>:" `



  <details>
- <summary><b> Transformers Script </b> </summary>
+ <summary>Transformers Script </summary>

  model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
  tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")
@@ -72,7 +75,7 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-sentiment-tool'*



- <summary><b>Using as Function Call in LLMWare</b></summary>
+ <summary>Using as Function Call in LLMWare</summary>

  from llmware.models import ModelCatalog
  slim_model = ModelCatalog().load_model("llmware/slim-sentiment")
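
For readers of this commit, here is a minimal sketch of how the updated prompt format could be exercised with the Transformers classes named in the diff. The model and tokenizer identifiers, the `function`/`params` values, and the prompt template come from the hunks above; the sample text, the decoding settings, and reading the literal `/n` in the format line as a newline are assumptions for illustration only.

```python
# Minimal sketch (not from the README): exercising the prompt format added
# in this commit with the Transformers classes shown in the diff.
# Assumptions: sample text, greedy decoding settings, and treating the
# literal "/n" in the format line as a newline ("\n").

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")

function = "classify"
params = "sentiment"
text = "The earnings call was a disappointment, and guidance was cut again."

# "<human> " + {text} + "\n" + "<{function}> " + {params} + "</{function}>" + "\n<bot>:"
prompt = "<human> " + text + "\n" + "<" + function + "> " + params + "</" + function + ">" + "\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=False)

# Decode only the tokens generated after the prompt.
response = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)  # expected to resemble: {"sentiment": ["negative"]}
```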
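The second hunk only trims the summary tag for the LLMWare function-call section, and the diff cuts off after `load_model()`. As a companion sketch, the snippet below shows how that path is typically completed; the `function_call()` invocation and its output shape are assumptions about the llmware API, not text from this README.

```python
# Hedged sketch: the diff shows only the import and load_model() lines; the
# function_call() usage below is an assumption about the llmware API,
# included for illustration.
from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-sentiment")

text = "The earnings call was a disappointment, and guidance was cut again."

# For slim models, function_call() wraps the prompt assembly shown above
# and returns a structured response, e.g. {"sentiment": ["negative"]}.
response = slim_model.function_call(text)
print(response)
```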