CGRE is a generation-based relation extraction model

· A SOTA Chinese end-to-end relation extraction model, using BART as the backbone.

· Pretrained from the fnlp/bart-base-chinese checkpoint on distantly supervised data from CN-DBpedia.

· Achieves SOTA results on many Chinese relation extraction datasets, such as DuIE 1.0, DuIE 2.0, and HacRED.

· Easy to use, just like a normal generation task.

· The input is a sentence and the output is the linearized triples. For example, input: 姚明是一名NBA篮球运动员 ("Yao Ming is an NBA basketball player"), output: [subj]姚明[obj]NBA[rel]公司[obj]篮球运动员[rel]职业; a minimal parsing sketch follows this list.
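
The linearized output can be turned back into (subject, relation, object) triples with a few lines of post-processing. The sketch below assumes the [subj]/[obj]/[rel] markers exactly as written in the example above; the parse_triples helper is illustrative and not part of the released code.

import re

def parse_triples(output: str):
    """Parse a linearized output string of the form
    [subj]S[obj]O1[rel]R1[obj]O2[rel]R2... into (subject, relation, object) triples.
    Illustrative sketch only."""
    triples = []
    # Each [subj] marker starts a new subject segment.
    for chunk in re.split(r"\[subj\]", output):
        if not chunk:
            continue
        # The subject is everything before the first [obj] marker.
        subj, _, rest = chunk.partition("[obj]")
        # Every remaining "object[rel]relation" pair yields one triple.
        for pair in re.split(r"\[obj\]", rest):
            if "[rel]" not in pair:
                continue
            obj, _, rel = pair.partition("[rel]")
            triples.append((subj, rel, obj))
    return triples

print(parse_triples("[subj]姚明[obj]NBA[rel]公司[obj]篮球运动员[rel]职业"))
# -> [('姚明', '公司', 'NBA'), ('姚明', '职业', '篮球运动员')]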


Using the model:

from transformers import BertTokenizer, BartForConditionalGeneration

model_name = 'fnlp/bart-base-chinese'

tokenizer_kwargs = {
    "use_fast": True,
    "additional_special_tokens": ['<rel>', '<obj>', '<subj>'],
}  # if the special tokens are not shown in the model card, please open the README file

tokenizer = BertTokenizer.from_pretrained(model_name, **tokenizer_kwargs)

# Load the seq2seq model from the same checkpoint.
model = BartForConditionalGeneration.from_pretrained(model_name)
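
A minimal generation sketch, treating extraction as a normal generation task as described above. It assumes the model loaded in the previous snippet (swap in the CGRE checkpoint for meaningful triples); the beam size and max_length are illustrative values, not settings published with the model.

# Encode the example sentence and generate the linearized triples.
sentence = "姚明是一名NBA篮球运动员"
inputs = tokenizer(sentence, return_tensors="pt")

output_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    num_beams=4,      # illustrative beam size
    max_length=128,   # illustrative output length limit
)

# Keep special tokens so the triple markers survive decoding.
decoded = tokenizer.decode(output_ids[0], skip_special_tokens=False)
print(decoded)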