---
license: apache-2.0  
inference: false  
---

# SLIM-SENTIMENT

**slim-sentiment** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models fine-tuned for function-calling.  

slim-sentiment has been fine-tuned for **sentiment analysis** function calls, generating output consisting of a Python dictionary with the specified keys, e.g.:  

&nbsp;&nbsp;&nbsp;&nbsp;`{"sentiment": ["positive"]}`


SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.  
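
For instance, the returned dictionary can branch a downstream step directly. Here is a minimal sketch (the routing actions below are hypothetical placeholders, not part of any llmware API):

    # hypothetical downstream step: branch on the structured output
    response = {"sentiment": ["negative"]}   # e.g., parsed output from slim-sentiment

    if "negative" in response.get("sentiment", []):
        print("routing to escalation queue")      # placeholder action
    else:
        print("routing to standard processing")   # placeholder action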

Each slim model has a 'quantized tool' version, e.g.,  [**'slim-sentiment-tool'**](https://huggingface.co/llmware/slim-sentiment-tool).  
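
As a quick sketch (assuming the tool version is registered in the llmware ModelCatalog under the short name `slim-sentiment-tool`, and that it exposes the same `function_call` interface), the quantized version can be loaded and called in the same way:

    from llmware.models import ModelCatalog

    # assumption: 'slim-sentiment-tool' is registered in the llmware model catalog
    sentiment_tool = ModelCatalog().load_model("slim-sentiment-tool")
    response = sentiment_tool.function_call("Service was excellent.", params=["sentiment"], function="classify")
    print(response)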


## Prompt format:

`function = "classify"`  
`params = "sentiment"`  
`prompt = "<human>: " + {text} + "\n" + `  
&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&nbsp; &nbsp; &nbsp; &nbsp;`"<{function}> " + {params} + " </{function}>" + "\n<bot>:"`  
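
For example, with `text = "Service was excellent."`, the assembled prompt string is:

    <human>: Service was excellent.
    <classify> sentiment </classify>
    <bot>: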


<details>
<summary>Transformers Script</summary>

    import ast
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")

    function = "classify"
    params = "sentiment"

    text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."  
    
    prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

    inputs = tokenizer(prompt, return_tensors="pt")
    start_of_input = len(inputs.input_ids[0])

    outputs = model.generate(
        inputs.input_ids.to('cpu'),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100
    )

    output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

    print("output only: ", output_only)  

    # here's the fun part - try to convert the llm output string into a python dictionary
    try:
        output_only = ast.literal_eval(output_only)
        print("success - converted to python dictionary automatically")
    except (ValueError, SyntaxError):
        print("fail - could not convert to python dictionary automatically - ", output_only)

</details>
 
<details>
<summary>Using as Function Call in LLMWare</summary>

    from llmware.models import ModelCatalog

    # load the model from the llmware catalog and run a function call
    slim_model = ModelCatalog().load_model("llmware/slim-sentiment")

    text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."

    response = slim_model.function_call(text, params=["sentiment"], function="classify")

    print("llmware - llm_response: ", response)

</details>  

    
## Model Card Contact

Darren Oberst & llmware team  

[Join us on Discord](https://discord.gg/MhZn5Nc39h)