---
tasks:
- Relation Extraction
widgets:
  - examples:
      - name: 1
        title: Message-Topic(e1,e2)
        inputs:
          - name: token
            data: ["the", "most", "common", "audits", "were", "about", "waste", "and", "recycling", "."]
          - name: h
            data: 
              - name: audits
                pos: [3, 4]
          - name: t
            data: 
              - name: waste
                pos: [6, 7]
                 
      - name: 2
        title: Product-Producer(e2,e1)
        inputs:
          - name: token
            data: ["the", "ombudsman", "'s", "report", "concluded", "that", "``", "a", "large", "part", "of", "the", "package", "was", "not", "provided", "''", "."]
          - name: h
            data: 
              - name: ombudsman
                pos: [1, 2]
          - name: t
            data: 
              - name: report
                pos: [3, 4]
      - name: 3
        title: Instrument-Agency(e2,e1)
        inputs:
          - name: token
            data: ["many", "professional", "cartomancers", "use", "a", "regular", "deck", "of", "playing", "cards", "for", "divination", "."]
          - name: h
            data: 
              - name: cartomancers
                pos: [2, 3]
          - name: t
            data: 
              - name: cards
                pos: [9, 10]
        
      - name: 4
        title: Entity-Destination(e1,e2)
        inputs:
          - name: token
            data: ["nasa", "kepler", "mission", "sends", "names", "into", "space", "."]
          - name: h
            data: 
              - name: names
                pos: [4, 5]
          - name: t
            data: 
              - name: space
                pos: [6, 7]
        
      - name: 5
        title: Cause-Effect(e2,e1)
        inputs:
          - name: token
            data: ["sorace", "was", "unaware", "that", "her", "anger", "was", "caused", "by", "the", "abuse", "."]
          - name: h
            data: 
              - name: anger
                pos: [5, 6]
          - name: t
            data: 
              - name: abuse
                pos: [10, 11]
     
      - name: 6
        title: Component-Whole(e1,e2)
        inputs:
          - name: token
            data: ["the", "castle", "was", "inside", "a", "museum", "."]
          - name: h
            data: 
              - name: castle
                pos: [1, 2]
          - name: t
            data: 
              - name: museum
                pos: [5, 6]
   
domain:
- nlp
frameworks:
- pytorch
backbone:
- BERT large 
metrics:
- accuracy
license: apache-2.0
language:
- en

---

# KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction
KnowPrompt is a model for relation extraction. It injects the latent knowledge contained in relation labels into prompt construction through learnable virtual template words and answer words, and synergistically optimizes their representations with structured constraints.

## Model description

We take the first step to inject latent knowledge contained in relation labels into prompt construction; relation extraction is then implemented with a prompt-tuning model. The implementation is as follows: learnable virtual template words placed around the entities are initialized with aggregated entity embeddings to inject entity knowledge, and the embeddings of each relation label are averaged to initialize virtual answer words that inject relation knowledge. In this structure, entities and relations constrain each other, and the virtual template and answer words should remain contextually relevant, so we introduce synergistic optimization to refine the virtual template and answer words.
![image.png](./model.png)
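
The following is a minimal, illustrative sketch of the answer-word initialization described above; it is not the released training code. The backbone name, the `[REL*]` placeholder tokens, and the label subset are assumptions for demonstration only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Placeholder backbone; the checkpoint actually used by this card may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-large-uncased")

# Illustrative subset of SemEval relation labels (see the widget examples above).
relation_labels = ["Message-Topic(e1,e2)", "Product-Producer(e2,e1)", "Cause-Effect(e2,e1)"]

# Register one learnable virtual answer word per relation label.
answer_words = [f"[REL{i}]" for i in range(len(relation_labels))]
tokenizer.add_special_tokens({"additional_special_tokens": answer_words})
model.resize_token_embeddings(len(tokenizer))

embeddings = model.get_input_embeddings()
with torch.no_grad():
    for word, label in zip(answer_words, relation_labels):
        # Average the sub-token embeddings of the relation label to initialize
        # the corresponding virtual answer word (relation knowledge injection).
        label_ids = tokenizer(label, add_special_tokens=False)["input_ids"]
        answer_id = tokenizer.convert_tokens_to_ids(word)
        embeddings.weight[answer_id] = embeddings.weight[label_ids].mean(dim=0)
```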

## Intended uses & limitations

This model is used for relation extraction; the extracted information can support further downstream NLP tasks such as information retrieval, dialogue generation, and question answering. Please refer to the code example for details on how to use it.
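
The widget examples in this card's metadata show the expected input format: a tokenized sentence plus head (`h`) and tail (`t`) entity mentions with `[start, end)` token positions. Below is a minimal sketch of building such an input; how it is fed to the model depends on the demo code cloned at the end of this card.

```python
# Input format mirrors the widget examples above: a token list plus head/tail
# entity mentions with [start, end) token offsets.
sample = {
    "token": ["the", "most", "common", "audits", "were", "about",
              "waste", "and", "recycling", "."],
    "h": {"name": "audits", "pos": [3, 4]},  # head entity span
    "t": {"name": "waste", "pos": [6, 7]},   # tail entity span
}

# Sanity-check that each entity name matches the tokens at its span.
for key in ("h", "t"):
    start, end = sample[key]["pos"]
    assert " ".join(sample["token"][start:end]) == sample[key]["name"]
```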

The relation labels in the model's training data are limited, so the model can only generalize to real-world relations to a certain extent.

## Training data

We adopt SemEval (SemEval-2010 Task 8) as the dataset.

| **Dataset** | **# Train.** | **# Val.** | **# Test.** | **# Rel.** |
| ----------- | ------------ | ---------- | ----------- | ---------- |
| SemEval     | 6,507        | 1,493      | 2,717       | 19         |

## Training procedure

### Training

The training is divided into two phases. The first phase synergistically optimizes the virtual template words and answer words with the joint loss
$$
\mathcal{J}=\mathcal{J}_{[\text{MASK}]}+\lambda \mathcal{J}_{\text{structured}},
$$

where $\lambda$ is a hyperparameter that weights the two loss terms. The second phase optimizes all parameters with a smaller learning rate, starting from the optimized virtual template and answer words, and uses only the loss $\mathcal{J}_{[\text{MASK}]}$ to fine-tune the language model. The hyperparameters differ across datasets, as shown in the script files in the source code. Taking SemEval as an example, the hyperparameters are set as follows:
```
max_epochs=10
max_sequence_length=256
batch_size=16
learning_rate=3e-5
t_lambda=0.001
```
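
For illustration, here is a minimal PyTorch sketch of the phase-one joint objective $\mathcal{J}=\mathcal{J}_{[\text{MASK}]}+\lambda \mathcal{J}_{\text{structured}}$. The structured term below is a simplified, KE-style placeholder rather than the exact formulation from the paper or the released code.

```python
import torch
import torch.nn.functional as F

def joint_loss(mask_logits, relation_targets, head_emb, tail_emb, rel_emb, t_lambda=0.001):
    """Schematic phase-one objective: J = J_[MASK] + lambda * J_structured.

    mask_logits:      [batch, num_answer_words] scores at the [MASK] position
    relation_targets: [batch] gold relation indices
    head_emb, tail_emb, rel_emb: [batch, hidden] virtual template / answer embeddings
    """
    # J_[MASK]: predict the gold relation's virtual answer word at [MASK].
    j_mask = F.cross_entropy(mask_logits, relation_targets)

    # Placeholder structured constraint: the head embedding translated by the
    # relation embedding should land near the tail embedding.
    j_structured = torch.norm(head_emb + rel_emb - tail_emb, dim=-1).mean()

    return j_mask + t_lambda * j_structured
```

Phase two would keep only `j_mask` and fine-tune all parameters with the smaller learning rate described above.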
### Data Evaluation and Results
The results of the comparison with other models in the standard setting are shown in the following table.

| **Methods** | **Precision** |
| ----------- | ------------- |
| Fine-tuning | 87.6          |
| KnowBERT    | 89.1          |
| MTB         | 89.5          |
| PTR         | 89.9          |
| KnowPrompt  | 90.2 (+0.3)   |

In the low-resource setting, we performed 8-, 16-, and 32-shot experiments: $K$ instances of each class are sampled from the original training and validation sets to form the few-shot training and validation sets. The results are as follows:

| Split | **Methods** | **Precision** |
| ----- | ----------- | ------------- |
| k=8   | Fine-tuning | 41.3          |
|       | GDPNet      | 42.0          |
|       | PTR         | 70.5          |
|       | KnowPrompt  | 74.3 (+33.0)  |
| k=16  | Fine-tuning | 65.2          |
|       | GDPNet      | 67.5          |
|       | PTR         | 81.3          |
|       | KnowPrompt  | 82.9 (+17.7)  |
| k=32  | Fine-tuning | 80.1          |
|       | GDPNet      | 81.2          |
|       | PTR         | 84.2          |
|       | KnowPrompt  | 84.8 (+4.7)   |

As $K$ decreases from 32 to 8, the improvement of KnowPrompt over the other three methods increases gradually.
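
A minimal sketch of the per-class sampling described above; the `relation` field name and the fixed seed are assumptions, and the released scripts may sample differently.

```python
import random
from collections import defaultdict

def sample_k_shot(examples, k, seed=42):
    """Sample k instances per relation class to build a few-shot split."""
    rng = random.Random(seed)
    by_relation = defaultdict(list)
    for example in examples:
        by_relation[example["relation"]].append(example)

    subset = []
    for items in by_relation.values():
        rng.shuffle(items)
        subset.extend(items[:k])  # classes with fewer than k examples keep them all
    return subset
```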
#### BibTeX entry and citation info
```
@inproceedings{DBLP:conf/www/ChenZXDYTHSC22,
  author    = {Xiang Chen and
               Ningyu Zhang and
               Xin Xie and
               Shumin Deng and
               Yunzhi Yao and
               Chuanqi Tan and
               Fei Huang and
               Luo Si and
               Huajun Chen},
  editor    = {Fr{\'{e}}d{\'{e}}rique Laforest and
               Rapha{\"{e}}l Troncy and
               Elena Simperl and
               Deepak Agarwal and
               Aristides Gionis and
               Ivan Herman and
               Lionel M{\'{e}}dini},
  title     = {KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization
               for Relation Extraction},
  booktitle = {{WWW} '22: The {ACM} Web Conference 2022, Virtual Event, Lyon, France,
               April 25 - 29, 2022},
  pages     = {2778--2788},
  publisher = {{ACM}},
  year      = {2022},
  url       = {https://doi.org/10.1145/3485447.3511998},
  doi       = {10.1145/3485447.3511998},
  timestamp = {Tue, 26 Apr 2022 16:02:09 +0200},
  biburl    = {https://dblp.org/rec/conf/www/ChenZXDYTHSC22.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

Clone the demo repository:

```bash
 git clone https://www.modelscope.cn/jeno11/knowprompt_demo.git
```