This model is initialized from [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext), first pre-trained on CMNLI and OCNLI and then fine-tuned on the [CDConv dataset](https://github.com/thu-coai/cdconv). It performs 2-class classification for 2-turn dialogue contradiction detection. Usage example:

```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('chujiezheng/roberta-base-cdconv')
model = BertForSequenceClassification.from_pretrained('chujiezheng/roberta-base-cdconv')
model.eval()

turn1 = [
    "嗯嗯,你喜欢钓鱼吗?",  # user: "Mm-hm, do you like fishing?"
    "喜欢啊,钓鱼很好玩的",  # bot: "I do, fishing is a lot of fun"
]
turn2 = [
    "你喜欢钓鱼吗?",  # user: "Do you like fishing?"
    "不喜欢,我喜欢看别人钓鱼",  # bot: the utterance we want to check for contradiction ("No, I like watching other people fish")
]  # turn1 and turn2 do not have to be two consecutive turns
text1 = "[SEP]".join(turn1 + turn2[:1])
text2 = turn2[1]

model_input = tokenizer(text1, text2, return_tensors='pt', return_token_type_ids=True, return_attention_mask=True)
with torch.no_grad():  # inference only, no gradients needed
    model_output = model(**model_input, return_dict=False)
prediction = torch.argmax(model_output[0].cpu(), dim=-1)[0].item()
print(prediction)  # 0 for non-contradiction, 1 for contradiction
```

This fine-tuned model achieves 75.7 accuracy and 72.3 macro-F1 on the CDConv test set.
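For reference, the sketch below shows one way such numbers could be computed. It assumes a hypothetical `test_examples` list of `(turn1, turn2, label)` tuples built from the CDConv test split (the actual loading depends on how you obtain the dataset) and uses scikit-learn for the metrics:

```python
import torch
from sklearn.metrics import accuracy_score, f1_score

# `tokenizer` and `model` are taken from the usage example above;
# `test_examples` is a hypothetical list of (turn1, turn2, label) tuples.
predictions, labels = [], []
for turn1, turn2, label in test_examples:
    text1 = "[SEP]".join(turn1 + turn2[:1])  # context: both user turns + first bot reply
    text2 = turn2[1]                         # the bot utterance under test
    model_input = tokenizer(text1, text2, return_tensors='pt',
                            return_token_type_ids=True, return_attention_mask=True)
    with torch.no_grad():
        logits = model(**model_input, return_dict=True).logits
    predictions.append(torch.argmax(logits, dim=-1)[0].item())
    labels.append(label)

print('accuracy:', accuracy_score(labels, predictions))
print('macro-F1:', f1_score(labels, predictions, average='macro'))
```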

Please cite the [original paper](https://arxiv.org/abs/2210.08511) if you use this model:

```bib
@inproceedings{zheng-etal-2022-cdconv,
  title={CDConv: A Benchmark for Contradiction Detection in Chinese Conversations},
  author={Zheng, Chujie  and 
    Zhou, Jinfeng  and 
    Zheng, Yinhe  and 
    Peng, Libiao  and 
    Guo, Zhen  and 
    Wu, Wenquan  and 
    Niu, Zhengyu  and 
    Wu, Hua  and 
    Huang, Minlie},
  booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
  year={2022}
}
```