---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

# Llama 3.1 -- Fine-tuned on FEDI v1.2

This is an 8-bit quantized Llama 3.1 GGUF model, trained on the new version of the FEDI dataset. It covers the following domains:
12
+
13
+ - Parcel Choice: In parcel choice, the system's task is to help the user choose the right shipping box and delivery option for their needs (given the weight of the items to be sent and the destination).
14
+ - Recharge Phone: In recharge phone, the task is to top up the user's prepaid SIM card.
15
+ - Building Access: In building access, the system acts as a receptionist and is responsible for access control.
16
+ - Question Answering: In question answering, the system runs the customer support for an insurance and postal service company.
17
+ - Parcel Shipping: In parcel shipping, the system helps the user to choose the right shipping box and shipping product.
18
+
## Prompt Format

The model was trained on three tasks: intent prediction, slot prediction, and response generation. The following are example prompts for each task:

### Intent Prediction

```text
Language models are trained to understand and respond to human language. They interpret user queries and requests and generate informative and engaging responses that are tailored to the respective context and task. To do this, they have to interpret the task from a user utterance, extract the task-related attributes mentioned in the user utterance and dialogue history and take all this information into account to generate a response in a helpful and friendly manner. You are such a language model.

###Instruction
Given is the following dialogue between a user and a language model (system):

{{history}}

Which of the following tasks is addressed by the last user utterance?

{{intents}}

Return the task name in a parsable JSON object. Here is an example:
{"result": {"intent": "parcel choice"}}

Return just one task name and don't include any additional notes or explanations.

###RESPONSE
```

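At inference time, the `{{history}}` and `{{intents}}` placeholders are filled in before the prompt is sent to the model, and the answer is expected to be the JSON object shown above. A minimal sketch of both steps (the helper names are illustrative, not part of the model or dataset; the preamble is omitted for brevity):

```python
import json

# Instruction part of the intent-prediction prompt (preamble omitted for brevity).
INTENT_TEMPLATE = """###Instruction
Given is the following dialogue between a user and a language model (system):

{{history}}

Which of the following tasks is addressed by the last user utterance?

{{intents}}

Return the task name in a parsable JSON object. Here is an example:
{"result": {"intent": "parcel choice"}}

Return just one task name and don't include any additional notes or explanations.

###RESPONSE
"""

def build_intent_prompt(history: str, intents: list[str]) -> str:
    # Fill the double-brace placeholders; intents are listed one per line.
    return (INTENT_TEMPLATE
            .replace("{{history}}", history)
            .replace("{{intents}}", "\n".join(intents)))

def parse_intent(raw: str) -> str:
    # The model is trained to answer with {"result": {"intent": "..."}}.
    return json.loads(raw)["result"]["intent"]

prompt = build_intent_prompt(
    "User: I need to send a 2 kg package to Berlin.",
    ["parcel choice", "recharge phone", "building access"],
)
print(parse_intent('{"result": {"intent": "parcel choice"}}'))  # parcel choice
```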
### Slot Prediction

```text
Language models are trained to understand and respond to human language. They interpret user queries and requests and generate informative and engaging responses that are tailored to the respective context and task. To do this, they have to interpret the task from a user utterance, extract the task-related attributes mentioned in the user utterance and dialogue history and take all this information into account to generate a response in a helpful and friendly manner. You are such a language model.

###Instruction
Given is the following dialogue between a user and a language model (system):
{{history}}

The language model (system) is used as a virtual agent in the task of {{intent}}. {{task}}

Attributes:
{{attributes}}

Extract the attributes according to the task description from the dialogue above. Only copy values from the dialogue above. Return the results in json format. Here is an example:
{{example}}

###RESPONSE
```

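The prompt instructs the model to only copy values from the dialogue, so it can be worth verifying that after parsing. A small sketch, assuming the answer uses the same `{"result": {...}}` envelope as the other tasks (the function name is illustrative):

```python
import json

def grounded_slots(raw_json: str, history: str) -> dict:
    """Parse a slot-prediction answer and keep only values that literally
    occur in the dialogue, mirroring the instruction to 'only copy values
    from the dialogue above'. Assumes a {"result": {...}} envelope."""
    slots = json.loads(raw_json)["result"]
    return {k: v for k, v in slots.items()
            if isinstance(v, str) and v.lower() in history.lower()}

history = "User: I want to ship a parcel to Berlin, it weighs 2 kg."
raw = '{"result": {"destination": "Berlin", "weight": "2 kg", "sender": "Alice"}}'
print(grounded_slots(raw, history))  # {'destination': 'Berlin', 'weight': '2 kg'}
```

Here `"Alice"` is dropped because it never appears in the dialogue, which flags a likely hallucinated slot value.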
### Response Generation

```text
Language models are trained to understand and respond to human language. They interpret user queries and requests and generate informative and engaging responses that are tailored to the respective context and task. To do this, they have to interpret the task from a user utterance, extract the task-related attributes mentioned in the user utterance and dialogue history and take all this information into account to generate a response in a helpful and friendly manner. You are such a language model.

###Instruction
Given is the following dialogue between a user and a language model (system):

{{history}}

The dialogue follows the following task description:

{{task}}

Attributes:
{{attributes}}

Already known attribute values:
{{slots}}

Missing attribute values:
{{missing_slots}}

Act as a member of the staff and generate the next system utterance for the dialogue above in such a way that it helps to clarify the missing attribute values. Your name is {{avatar_name}}. Your gender is {{gender}}. Also consider the user emotion (the user feels {{emotion}}) and the following knowledge in your response:
{{knowledge}}

Return your response in json format. Here is an example:
{"result": {"system": "Of course I can help you with that. Can you please provide me with your name and the name of your host?"}}

###RESPONSE
```
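The generated answer should be exactly the JSON object shown above, but in practice a model may wrap it in extra text, so extracting the outermost braces before parsing is a cheap safeguard. A minimal sketch (the function name is illustrative):

```python
import json

def extract_system_utterance(raw: str) -> str:
    # Models occasionally wrap the JSON in extra text; take the outermost braces.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])["result"]["system"]

raw = 'Sure! {"result": {"system": "Of course I can help you with that."}}'
print(extract_system_utterance(raw))  # Of course I can help you with that.
```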