arXiv:2402.01714

TrICy: Trigger-guided Data-to-text Generation with Intent aware Attention-Copy

Published on Jan 25, 2024

Abstract

Data-to-text (D2T) generation is a crucial task in many natural language understanding (NLU) applications and forms the foundation of task-oriented dialog systems. In the context of conversational AI solutions that can work directly with local data on the user's device, architectures utilizing large pre-trained language models (PLMs) are impractical for on-device deployment due to their high memory footprint. To this end, we propose TrICy, a novel lightweight framework for an enhanced D2T task that generates text sequences based on the intent in context and may further be guided by user-provided triggers. We leverage an attention-copy mechanism to predict out-of-vocabulary (OOV) words accurately. Performance analyses on the E2E NLG dataset (BLEU: 66.43%, ROUGE-L: 70.14%), the WebNLG dataset (BLEU: Seen 64.08%, Unseen 52.35%), and our Custom dataset related to text messaging applications showcase our architecture's effectiveness. Moreover, we show that by leveraging an optional trigger input, data-to-text generation quality increases significantly and achieves a new SOTA score of 69.29% BLEU for E2E NLG. Furthermore, our analyses show that TrICy achieves at least 24% and 3% improvement in BLEU and METEOR, respectively, over LLMs like GPT-3, ChatGPT, and Llama 2. We also demonstrate that in some scenarios, a performance improvement due to triggers is observed even when they are absent during training.
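
The attention-copy mechanism referenced in the abstract follows the general pointer-generator idea: at each decoding step, the model mixes a generation distribution over a fixed vocabulary with a copy distribution obtained by scattering the attention weights onto the source tokens, so source-only (OOV) words remain reachable. The PyTorch sketch below illustrates only that generic step; the function name, dot-product attention, gate parameterization, and all shapes are illustrative assumptions, not the authors' exact TrICy architecture.

```python
# Minimal sketch of a generic attention-copy (pointer-generator style) step.
# NOT the TrICy implementation; all names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_copy_step(decoder_state, encoder_states, src_ext_ids,
                        vocab_proj, p_gen_proj, extended_vocab_size):
    """One decoding step mixing generation from a fixed vocabulary with
    copying source tokens via the attention weights.

    decoder_state:  (batch, hidden)           current decoder hidden state
    encoder_states: (batch, src_len, hidden)  encoder outputs
    src_ext_ids:    (batch, src_len)          source ids in an extended vocab
                                              (OOV source words get ids >= vocab)
    """
    # Dot-product attention over the encoder states.
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)
    attn = F.softmax(scores, dim=-1)                              # (batch, src_len)
    context = torch.bmm(attn.unsqueeze(1), encoder_states).squeeze(1)

    # Gate deciding how much to generate vs. copy at this step.
    p_gen = torch.sigmoid(p_gen_proj(torch.cat([decoder_state, context], dim=-1)))

    # Generation distribution over the fixed vocabulary, padded out to the
    # extended vocabulary that also indexes source-only (OOV) tokens.
    p_vocab = F.softmax(vocab_proj(decoder_state), dim=-1)
    p_vocab = F.pad(p_vocab, (0, extended_vocab_size - p_vocab.size(1)))

    # Copy distribution: scatter the attention mass onto the source token ids.
    p_copy = torch.zeros_like(p_vocab)
    p_copy.scatter_add_(1, src_ext_ids, attn)

    # Final mixture; OOV source tokens receive probability only through copying.
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy

# Toy usage with made-up sizes.
batch, src_len, hidden, vocab, ext_vocab = 2, 5, 8, 10, 13
vocab_proj = nn.Linear(hidden, vocab)
p_gen_proj = nn.Linear(2 * hidden, 1)
dist = attention_copy_step(torch.randn(batch, hidden),
                           torch.randn(batch, src_len, hidden),
                           torch.randint(0, ext_vocab, (batch, src_len)),
                           vocab_proj, p_gen_proj, ext_vocab)
print(dist.shape)  # torch.Size([2, 13]); each row sums to 1
```

Each output row is a proper distribution over the extended vocabulary, so an OOV source token (an id at or beyond the fixed vocabulary size) can still be emitted through the copy term even though the generation head assigns it zero mass.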
