---
license: apache-2.0
---

# FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue

We present our dialogue pre-training model, FutureTOD, which distills future knowledge into the representation of the previous dialogue context using a self-training framework. Extensive experiments on diverse downstream dialogue tasks demonstrate the effectiveness of our model, especially its generalization, robustness, and ability to learn discriminative dialogue representations.

[This paper](https://arxiv.org/abs/2306.10315) has been accepted at the ACL 2023 Main Conference.
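
As a loose illustration of the idea (a sketch, not the paper's exact objective; the `student`/`teacher` interfaces and the use of the `[CLS]` state are assumptions), the core self-distillation step can be pictured as: a teacher encodes the context together with future turns, and the student, which sees only the context, is trained to match the teacher's representation.

```python
import torch
import torch.nn.functional as F

def future_distillation_loss(student, teacher, context_ids, future_ids):
    """Pull the student's context-only representation toward the teacher's
    representation of context + future turns. Illustrative only; the real
    objective is defined in the paper."""
    with torch.no_grad():  # the teacher provides fixed targets
        full_ids = torch.cat([context_ids, future_ids], dim=1)
        target = teacher(full_ids).last_hidden_state[:, 0]   # [CLS] of context + future
    pred = student(context_ids).last_hidden_state[:, 0]      # [CLS] of context only
    return F.mse_loss(pred, target)
```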

## Usage

We release our futuretod-base-v1.0 model here. You can use this model for downstream TOD tasks by following the instructions in [FutureTOD](https://github.com/Zeng-WH/FutureTOD).
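
As a rough sketch of what loading the checkpoint might look like with Hugging Face `transformers`: the repo id `AndrewZeng/futuretod-base-v1.0` and the `[usr]`/`[sys]` turn markers below are assumptions for illustration, so check the FutureTOD repository for the exact conventions.

```python
# Hedged sketch: assumes a BERT-compatible checkpoint hosted under the
# hypothetical repo id below; adjust to the actual model location.
from transformers import AutoTokenizer, AutoModel

model_name = "AndrewZeng/futuretod-base-v1.0"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a dialogue context; the turn-marker format here is illustrative.
context = "[usr] I need a cheap restaurant in the centre. [sys] What cuisine would you like?"
inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model(**inputs)

# Take the [CLS] hidden state as a fixed-size dialogue representation.
dialogue_repr = outputs.last_hidden_state[:, 0]
print(dialogue_repr.shape)  # (1, hidden_size)
```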

## Citation

If you find our work helpful, please consider citing the following paper:

```
@article{zeng2023futuretod,
  title={FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue},
  author={Zeng, Weihao and He, Keqing and Wang, Yejie and Zeng, Chen and Wang, Jingang and Xian, Yunsen and Xu, Weiran},
  journal={arXiv preprint arXiv:2306.10315},
  year={2023}
}
```