Dataset schema (column name, type, and observed range or number of distinct values):

  entry_type           stringclasses   4 values
  citation_key         stringlengths   10 to 110 characters
  title                stringlengths   6 to 276 characters
  editor               stringclasses   723 values
  month                stringclasses   69 values
  year                 stringdate      1963-01-01 to 2022-01-01
  address              stringclasses   202 values
  publisher            stringclasses   41 values
  url                  stringlengths   34 to 62 characters
  author               stringlengths   6 to 2.07k characters
  booktitle            stringclasses   861 values
  pages                stringlengths   1 to 12 characters
  abstract             stringlengths   302 to 2.4k characters
  journal              stringclasses   5 values
  volume               stringclasses   24 values
  doi                  stringlengths   20 to 39 characters
  n                    stringclasses   3 values
  wer                  stringclasses   1 value
  uas                  null
  language             stringclasses   3 values
  isbn                 stringclasses   34 values
  recall               null
  number               stringclasses   8 values
  a                    null
  b                    null
  c                    null
  k                    null
  f1                   stringclasses   4 values
  r                    stringclasses   2 values
  mci                  stringclasses   1 value
  p                    stringclasses   2 values
  sd                   stringclasses   1 value
  female               stringclasses   0 values
  m                    stringclasses   0 values
  food                 stringclasses   1 value
  f                    stringclasses   1 value
  note                 stringclasses   20 values
  __index_level_0__    int64           22k to 106k
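Rows conform to this schema, and most of the metric-style columns (wer, uas, f1, and so on) are null for the bibliography rows shown below. The following is a minimal sketch of loading and filtering the table, assuming it is hosted as a Hugging Face dataset; the repository path "user/acl-anthology-bib" is a hypothetical placeholder, not the actual identifier.

```python
from datasets import load_dataset

# Hypothetical repository path; substitute the real dataset name.
ds = load_dataset("user/acl-anthology-bib", split="train")

# Keep only conference papers from the IWSLT 2022 proceedings.
iwslt = ds.filter(
    lambda row: row["entry_type"] == "inproceedings"
    and row["booktitle"] is not None
    and "IWSLT 2022" in row["booktitle"]
)

# Drop columns that are entirely null in the filtered subset.
empty = [c for c in iwslt.column_names if all(v is None for v in iwslt[c])]
iwslt = iwslt.remove_columns(empty)

print(len(iwslt), iwslt[0]["citation_key"])
```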
@inproceedings{ouyang-etal-2022-impact,
    title = "On the Impact of Noises in Crowd-Sourced Data for Speech Translation",
    author = "Ouyang, Siqi and Ye, Rong and Li, Lei",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.9/",
    doi = "10.18653/v1/2022.iwslt-1.9",
    pages = "92--97",
    abstract = "Training speech translation (ST) models requires large and high-quality datasets. MuST-C is one of the most widely used ST benchmark datasets. It contains around 400 hours of speech-transcript-translation data for each of the eight translation directions. This dataset passes several quality-control filters during creation. However, we find that MuST-C still suffers from three major quality issues: audio-text misalignment, inaccurate translation, and unnecessary speaker's names. What are the impacts of these data quality issues on model development and evaluation? In this paper, we propose an automatic method to fix or filter the above quality issues, using English-German (En-De) translation as an example. Our experiments show that ST models perform better on clean test sets, and the rank of proposed models remains consistent across different test sets. Besides, simply removing misaligned data points from the training set does not lead to a better ST model.",
}
% __index_level_0__: 25465
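Each row of the dump maps one-to-one onto a BibTeX entry, as in the reconstruction above. Below is a sketch of that mapping, under the assumption that the non-null string columns correspond directly to BibTeX fields; the function name and field list are illustrative, not part of the dataset. It emits brace-delimited values rather than the quoted style used above; both are valid BibTeX.

```python
# Columns that correspond to standard BibTeX fields, in a conventional order.
BIBTEX_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume", "number",
    "month", "year", "address", "publisher", "url", "doi", "pages", "isbn",
    "note", "abstract",
]

def row_to_bibtex(row: dict) -> str:
    """Render one dataset row as a BibTeX entry, skipping null fields."""
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIBTEX_FIELDS:
        value = row.get(field)
        if value is not None:
            lines.append(f"    {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)
```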
@inproceedings{anastasopoulos-etal-2022-findings,
    title = "Findings of the {IWSLT} 2022 Evaluation Campaign",
    author = "Anastasopoulos, Antonios and Barrault, Lo{\"i}c and Bentivogli, Luisa and Zanon Boito, Marcely and Bojar, Ond{\v{r}}ej and Cattoni, Roldano and Currey, Anna and Dinu, Georgiana and Duh, Kevin and Elbayad, Maha and Emmanuel, Clara and Est{\`e}ve, Yannick and Federico, Marcello and Federmann, Christian and Gahbiche, Souhir and Gong, Hongyu and Grundkiewicz, Roman and Haddow, Barry and Hsu, Benjamin and Javorsk{\'y}, D{\'a}vid and Kloudov{\'a}, V{\v{e}}ra and Lakew, Surafel and Ma, Xutai and Mathur, Prashant and McNamee, Paul and Murray, Kenton and N{\u{a}}dejde, Maria and Nakamura, Satoshi and Negri, Matteo and Niehues, Jan and Niu, Xing and Ortega, John and Pino, Juan and Salesky, Elizabeth and Shi, Jiatong and Sperber, Matthias and St{\"u}ker, Sebastian and Sudoh, Katsuhito and Turchi, Marco and Virkar, Yogesh and Waibel, Alexander and Wang, Changhan and Watanabe, Shinji",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.10/",
    doi = "10.18653/v1/2022.iwslt-1.10",
    pages = "98--157",
    abstract = "The evaluation campaign of the 19th International Conference on Spoken Language Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline speech translation, (iii) Speech to speech translation, (iv) Low-resource speech translation, (v) Multilingual speech translation, (vi) Dialect speech translation, (vii) Formality control for speech translation, (viii) Isometric speech translation. A total of 27 teams participated in at least one of the shared tasks. This paper details, for each shared task, the purpose of the task, the data that were released, the evaluation metrics that were applied, the submissions that were received and the results that were achieved.",
}
% __index_level_0__: 25466
@inproceedings{zhang-ao-2022-yitrans,
    title = "The {Y}i{T}rans Speech Translation System for {IWSLT} 2022 Offline Shared Task",
    author = "Zhang, Ziqiang and Ao, Junyi",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.11/",
    doi = "10.18653/v1/2022.iwslt-1.11",
    pages = "158--168",
    abstract = "This paper describes the submission of our end-to-end YiTrans speech translation system for the IWSLT 2022 offline task, which translates from English audio to German, Chinese, and Japanese. The YiTrans system is built on large-scale pre-trained encoder-decoder models. More specifically, we first design a multi-stage pre-training strategy to build a multi-modality model with a large amount of labeled and unlabeled data. We then fine-tune the corresponding components of the model for the downstream speech translation tasks. Moreover, we make various efforts to improve performance, such as data filtering, data augmentation, speech segmentation, and model ensembling. Experimental results show that our YiTrans system obtains a significant improvement over the strong baseline on three translation directions, and it achieves +5.2 BLEU improvements over last year's optimal end-to-end system on tst2021 English-German.",
}
% __index_level_0__: 25467
@inproceedings{shanbhogue-etal-2022-amazon,
    title = "{A}mazon {A}lexa {AI}'s System for {IWSLT} 2022 Offline Speech Translation Shared Task",
    author = "Shanbhogue, Akshaya and Xue, Ran and Chang, Ching-Yun and Campbell, Sarah",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.12/",
    doi = "10.18653/v1/2022.iwslt-1.12",
    pages = "169--176",
    abstract = "This paper describes Amazon Alexa AI's submission to the IWSLT 2022 Offline Speech Translation Task. Our system is an end-to-end speech translation model that leverages pretrained models and cross-modality transfer learning. We detail two improvements to the knowledge transfer schema. First, we implemented a new loss function that effectively reduces the knowledge gap between the audio and text modalities in the translation task. Second, we investigate multiple finetuning strategies, including sampling loss, language grouping and domain adaptation. These strategies aim to bridge the gaps between speech and text translation tasks. We also implement a multi-stage segmentation and merging strategy that yields improvements on the unsegmented development datasets. Results show that the proposed loss function consistently improves BLEU scores on the development datasets for both English-German and multilingual models. Additionally, certain language pairs see BLEU score improvements with specific finetuning strategies.",
}
% __index_level_0__: 25468
@inproceedings{gaido-etal-2022-efficient,
    title = "Efficient yet Competitive Speech Translation: {FBK}@{IWSLT}2022",
    author = "Gaido, Marco and Papi, Sara and Fucci, Dennis and Fiameni, Giuseppe and Negri, Matteo and Turchi, Marco",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.13/",
    doi = "10.18653/v1/2022.iwslt-1.13",
    pages = "177--189",
    abstract = "The primary goal of FBK's systems submission to the IWSLT 2022 offline and simultaneous speech translation tasks is to reduce model training costs without sacrificing translation quality. As such, we first question the need for ASR pre-training, showing that it is not essential to achieve competitive results. Second, we focus on data filtering, showing that a simple method that looks at the ratio between source and target characters yields a quality improvement of 1 BLEU. Third, we compare different methods to reduce the detrimental effect of the audio segmentation mismatch between training data manually segmented at sentence level and inference data that is automatically segmented. Towards the same goal of training cost reduction, we participate in the simultaneous task with the same model trained for offline ST. The effectiveness of our lightweight training strategy is shown by the high score obtained on the MuST-C en-de corpus (26.7 BLEU) and is confirmed in high-resource data conditions by a 1.6 BLEU improvement on the IWSLT2020 test set over last year's winning system.",
}
% __index_level_0__: 25469
@inproceedings{pham-etal-2022-effective,
    title = "Effective combination of pretrained models - {KIT}@{IWSLT}2022",
    author = "Pham, Ngoc-Quan and Nguyen, Tuan Nam and Nguyen, Thai-Binh and Liu, Danni and Mullov, Carlos and Niehues, Jan and Waibel, Alexander",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.14/",
    doi = "10.18653/v1/2022.iwslt-1.14",
    pages = "190--197",
    abstract = "Pretrained models in acoustic and textual modalities can potentially improve speech translation for both cascade and end-to-end approaches. In this evaluation, we look for the answer empirically by using the wav2vec, mBART50 and DeltaLM models to improve text and speech translation models. The experiments showed that the presence of these models together with an advanced audio segmentation method results in an improvement over the previous end-to-end system by up to 7 BLEU points. More importantly, the experiments showed that given enough data and modeling capacity to overcome the training difficulty, we can outperform even very competitive cascade systems. In our experiments, this gap can be as large as 2.0 BLEU points, the same margin by which cascade systems often led over the years.",
}
% __index_level_0__: 25470
@inproceedings{zhang-etal-2022-ustc,
    title = "The {USTC}-{NELSLIP} Offline Speech Translation Systems for {IWSLT} 2022",
    author = "Zhang, Weitai and Ye, Zhongyi and Tang, Haitao and Li, Xiaoxi and Zhou, Xinyuan and Yang, Jing and Cui, Jianwei and Deng, Pan and Shi, Mohan and Song, Yifan and Liu, Dan and Liu, Junhua and Dai, Lirong",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.15/",
    doi = "10.18653/v1/2022.iwslt-1.15",
    pages = "198--207",
    abstract = "This paper describes USTC-NELSLIP's submissions to the IWSLT 2022 Offline Speech Translation task, including speech translation of talks from English to German, English to Chinese and English to Japanese. We describe both cascaded architectures and end-to-end models which can directly translate source speech into target text. In the cascaded condition, we investigate the effectiveness of different model architectures with robust training and achieve 2.72 BLEU improvements over last year's optimal system on the MuST-C English-German test set. In the end-to-end condition, we build models based on Transformer and Conformer architectures, achieving 2.26 BLEU improvements over last year's optimal end-to-end system. The end-to-end system has obtained promising results, but it is still lagging behind our cascaded models.",
}
% __index_level_0__: 25471
@inproceedings{zhu-etal-2022-aisp,
    title = "The {AISP}-{SJTU} Simultaneous Translation System for {IWSLT} 2022",
    author = "Zhu, Qinpei and Wu, Renshou and Liu, Guangfeng and Zhu, Xinyu and Chen, Xingyu and Zhou, Yang and Miao, Qingliang and Wang, Rui and Yu, Kai",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.16/",
    doi = "10.18653/v1/2022.iwslt-1.16",
    pages = "208--215",
    abstract = "This paper describes AISP-SJTU's submissions for the IWSLT 2022 Simultaneous Translation task. We participate in the text-to-text and speech-to-text simultaneous translation from English to Mandarin Chinese. The training of the CAAT is improved by training across multiple values of the right-context window size, which achieves good online performance without fixing a prior right-context window size for training. For the speech-to-text task, the best model we submitted achieves 25.87, 26.21 and 26.45 BLEU in the low, medium and high latency regimes on tst-COMMON, corresponding to 27.94, 28.31 and 28.43 BLEU in the text-to-text task.",
}
% __index_level_0__: 25472
@inproceedings{guo-etal-2022-xiaomi,
    title = "The Xiaomi Text-to-Text Simultaneous Speech Translation System for {IWSLT} 2022",
    author = "Guo, Bao and Liu, Mengge and Zhang, Wen and Chen, Hexuan and Mu, Chang and Li, Xiang and Cui, Jianwei and Wang, Bin and Guo, Yuhang",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.17/",
    doi = "10.18653/v1/2022.iwslt-1.17",
    pages = "216--224",
    abstract = "This system paper describes the Xiaomi Translation System for the IWSLT 2022 Simultaneous Speech Translation (noted as SST) shared task. We participate in the English-to-Mandarin Chinese Text-to-Text (noted as T2T) track. Our system is built based on the Transformer model with novel techniques borrowed from our recent research work. For the data filtering, language-model-based and rule-based methods are conducted to filter the data to obtain high-quality bilingual parallel corpora. We also strengthen our system with some dominating techniques related to data augmentation, such as knowledge distillation, tagged back-translation, and iterative back-translation. We also incorporate novel training techniques such as R-drop, deep model, and large batch training which have been shown to be beneficial to the naive Transformer model. In the SST scenario, several variations of $\texttt{wait-k}$ strategies are explored. Furthermore, in terms of robustness, both data-based and model-based ways are used to reduce the sensitivity of our system to Automatic Speech Recognition (ASR) outputs. We finally design some inference algorithms and use the adaptive-ensemble method based on multiple model variants to further improve the performance of the system. Compared with strong baselines, fusing all techniques can improve our system by 2{\textasciitilde}3 BLEU scores under different latency regimes.",
}
% __index_level_0__: 25473
@inproceedings{hrinchuk-etal-2022-nvidia,
    title = "{NVIDIA} {N}e{M}o Offline Speech Translation Systems for {IWSLT} 2022",
    author = "Hrinchuk, Oleksii and Noroozi, Vahid and Khattar, Abhinav and Peganov, Anton and Subramanian, Sandeep and Majumdar, Somshubra and Kuchaiev, Oleksii",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.18/",
    doi = "10.18653/v1/2022.iwslt-1.18",
    pages = "225--231",
    abstract = "This paper provides an overview of NVIDIA NeMo's speech translation systems for the IWSLT 2022 Offline Speech Translation Task. Our cascade system consists of 1) a Conformer RNN-T automatic speech recognition model, 2) a punctuation-capitalization model based on a pre-trained T5 encoder, and 3) an ensemble of Transformer neural machine translation models fine-tuned on TED talks. Our end-to-end model has fewer parameters and consists of a Conformer encoder and a Transformer decoder. It relies on the cascade system by re-using its pre-trained ASR encoder and training on synthetic translations generated with the ensemble of NMT models. Our En-{\ensuremath{>}}De cascade and end-to-end systems achieve 29.7 and 26.2 BLEU on the 2020 test set respectively, both outperforming the previous year's best of 26 BLEU.",
}
% __index_level_0__: 25474
@inproceedings{zhang-etal-2022-niutranss,
    title = "The {N}iu{T}rans's Submission to the {IWSLT}22 {E}nglish-to-{C}hinese Offline Speech Translation Task",
    author = "Zhang, Yuhao and Huang, Canan and Xu, Chen and Liu, Xiaoqian and Li, Bei and Ma, Anxiang and Xiao, Tong and Zhu, Jingbo",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.19/",
    doi = "10.18653/v1/2022.iwslt-1.19",
    pages = "232--238",
    abstract = "This paper describes NiuTrans's submission to the IWSLT22 English-to-Chinese (En-Zh) offline speech translation task. The end-to-end and bilingual system is built with constrained English and Chinese data and translates English speech to Chinese text without intermediate transcription. Our speech translation models are composed of different pre-trained acoustic models and machine translation models connected by two kinds of adapters. We compared the effect of standard speech features (e.g. log Mel-filterbank) and pre-trained speech features, and tried to make them interact. The final submission is an ensemble of three potential speech translation models. Our single best and ensemble models achieve 18.66 BLEU and 19.35 BLEU respectively on the MuST-C En-Zh tst-COMMON set.",
}
% __index_level_0__: 25475
@inproceedings{wang-etal-2022-hw,
    title = "The {HW}-{TSC}'s Offline Speech Translation System for {IWSLT} 2022 Evaluation",
    author = "Li, Yinglu and Wang, Minghan and Guo, Jiaxin and Qiao, Xiaosong and Wang, Yuxia and Wei, Daimeng and Su, Chang and Chen, Yimeng and Zhang, Min and Tao, Shimin and Yang, Hao and Qin, Ying",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.20/",
    doi = "10.18653/v1/2022.iwslt-1.20",
    pages = "239--246",
    abstract = "This paper describes HW-TSC's design of the Offline Speech Translation System submitted for the IWSLT 2022 evaluation. We explored both cascade and end-to-end systems on three language tracks (en-de, en-zh and en-ja), and chose the cascade one as our primary submission. For the automatic speech recognition (ASR) part of the cascade system, there are three ASR models, Conformer, S2T-Transformer and U2, trained on a mixture of five datasets. During inference, transcripts are generated with the help of a domain-controlled generation strategy. Context-aware reranking and an ensemble-based anti-interference strategy are proposed to produce better ASR outputs. For the machine translation part, we pretrained three translation models on the WMT21 dataset and fine-tuned them on in-domain corpora. Our cascade system shows competitive performance compared with the known offline systems in industry and academia.",
}
% __index_level_0__: 25476
@inproceedings{wang-etal-2022-hw-tscs,
    title = "The {HW}-{TSC}'s Simultaneous Speech Translation System for {IWSLT} 2022 Evaluation",
    author = "Wang, Minghan and Guo, Jiaxin and Li, Yinglu and Qiao, Xiaosong and Wang, Yuxia and Li, Zongyao and Su, Chang and Chen, Yimeng and Zhang, Min and Tao, Shimin and Yang, Hao and Qin, Ying",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.21/",
    doi = "10.18653/v1/2022.iwslt-1.21",
    pages = "247--254",
    abstract = "This paper presents our participation in the IWSLT 2022 simultaneous speech translation evaluation. For the text-to-text (T2T) track, we participate in three language pairs and build a wait-k based simultaneous MT (SimulMT) model for the task. The model was pretrained on WMT21 news corpora and further improved with in-domain fine-tuning and self-training. For the speech-to-text (S2T) track, we designed both cascade and end-to-end systems in three language pairs. The cascade system is composed of a chunking-based streaming ASR model and the SimulMT model used in the T2T track. The end-to-end system is a simultaneous speech translation (SimulST) model based on the wait-k strategy, directly trained on a synthetic corpus produced by translating all texts of the ASR corpora into the specific target language with an offline MT model. It also contains a heuristic sentence-breaking strategy, preventing it from finishing the translation before the end of the speech. We evaluate our systems on the MuST-C tst-COMMON dataset and show that the end-to-end system is competitive with the cascade one. Meanwhile, we also demonstrate that the SimulMT model can be efficiently optimized by these approaches, resulting in improvements of 1-2 BLEU points.",
}
% __index_level_0__: 25477
@inproceedings{iranzo-sanchez-etal-2022-mllp,
    title = "{MLLP}-{VRAIN} {UPV} systems for the {IWSLT} 2022 Simultaneous Speech Translation and Speech-to-Speech Translation tasks",
    author = "Iranzo-S{\'a}nchez, Javier and Jorge Cano, Javier and P{\'e}rez-Gonz{\'a}lez-de-Martos, Alejandro and Gim{\'e}nez Pastor, Adri{\'a}n and Garc{\'e}s D{\'i}az-Mun{\'i}o, Gon{\c{c}}al V. and Baquero-Arnal, Pau and Silvestre-Cerd{\`a}, Joan Albert and Civera Saiz, Jorge and Sanchis, Albert and Juan, Alfons",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.22/",
    doi = "10.18653/v1/2022.iwslt-1.22",
    pages = "255--264",
    abstract = "This work describes the participation of the MLLP-VRAIN research group in the two shared tasks of the IWSLT 2022 conference: Simultaneous Speech Translation and Speech-to-Speech Translation. We present our streaming-ready ASR, MT and TTS systems for Speech Translation and Synthesis from English into German. Our submission combines these systems by means of a cascade approach, paying special attention to data preparation and decoding for streaming inference.",
}
% __index_level_0__: 25478
@inproceedings{tsiamas-etal-2022-pretrained,
    title = "Pretrained Speech Encoders and Efficient Fine-tuning Methods for Speech Translation: {UPC} at {IWSLT} 2022",
    author = "Tsiamas, Ioannis and G{\'a}llego, Gerard I. and Escolano, Carlos and Fonollosa, Jos{\'e} and Costa-juss{\`a}, Marta R.",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.23/",
    doi = "10.18653/v1/2022.iwslt-1.23",
    pages = "265--276",
    abstract = "This paper describes the submissions of the UPC Machine Translation group to the IWSLT 2022 Offline Speech Translation and Speech-to-Speech Translation tracks. The offline task involves translating English speech to German, Japanese and Chinese text. Our Speech Translation systems are trained end-to-end and are based on large pretrained speech and text models. We use an efficient fine-tuning technique that trains only specific layers of our system, and explore the use of adapter modules for the non-trainable layers. We further investigate the suitability of different speech encoders (wav2vec 2.0, HuBERT) for our models and the impact of knowledge distillation from the Machine Translation model that we use for the decoder (mBART). For segmenting the IWSLT test sets we fine-tune a pretrained audio segmentation model and achieve improvements of 5 BLEU compared to the given segmentation. Our best single model uses HuBERT and parallel adapters, achieving 29.42 BLEU on English-German MuST-C tst-COMMON and 26.77 on the IWSLT 2020 test set. By ensembling many models, we further increase translation quality to 30.83 and 27.78 BLEU respectively. Furthermore, our submission for English-Japanese achieves 15.85 and English-Chinese obtains 25.63 BLEU on the MuST-C tst-COMMON sets. Finally, we extend our system to perform English-German Speech-to-Speech Translation with a pretrained Text-to-Speech model.",
}
% __index_level_0__: 25479
@inproceedings{polak-etal-2022-cuni,
    title = "{CUNI}-{KIT} System for Simultaneous Speech Translation Task at {IWSLT} 2022",
    author = "Pol{\'a}k, Peter and Pham, Ngoc-Quan and Nguyen, Tuan Nam and Liu, Danni and Mullov, Carlos and Niehues, Jan and Bojar, Ond{\v{r}}ej and Waibel, Alexander",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.24/",
    doi = "10.18653/v1/2022.iwslt-1.24",
    pages = "277--285",
    abstract = "In this paper, we describe our submission to the Simultaneous Speech Translation at IWSLT 2022. We explore strategies to utilize an offline model in a simultaneous setting without the need to modify the original model. In our experiments, we show that our onlinization algorithm is almost on par with the offline setting while being 3x faster than offline in terms of latency on the test set. We also show that the onlinized offline model outperforms the best IWSLT2021 simultaneous system in medium and high latency regimes and is almost on par in the low latency regime. We make our system publicly available.",
}
% __index_level_0__: 25480
@inproceedings{guo-etal-2022-hw,
    title = "The {HW}-{TSC}'s Speech to Speech Translation System for {IWSLT} 2022 Evaluation",
    author = "Guo, Jiaxin and Li, Yinglu and Wang, Minghan and Qiao, Xiaosong and Wang, Yuxia and Shang, Hengchao and Su, Chang and Chen, Yimeng and Zhang, Min and Tao, Shimin and Yang, Hao and Qin, Ying",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.26/",
    doi = "10.18653/v1/2022.iwslt-1.26",
    pages = "293--297",
    abstract = "This paper presents HW-TSC's pipeline and results for the Offline Speech to Speech Translation task of IWSLT 2022. We design a cascade system consisting of an ASR model, a machine translation model and a TTS model to convert speech from one language into another (en-de). For the ASR part, we find that better performance can be obtained by ensembling multiple heterogeneous ASR models and performing reranking on beam candidates. We also find that the combination of a context-aware reranking strategy and an MT model fine-tuned on the in-domain dataset helps improve performance, as it mitigates inconsistencies in transcripts caused by the lack of context. Finally, we use the officially provided VITS model to produce audio files from the translation hypotheses.",
}
% __index_level_0__: 25482
@inproceedings{yan-etal-2022-cmus,
    title = "{CMU}'s {IWSLT} 2022 Dialect Speech Translation System",
    author = "Yan, Brian and Fernandes, Patrick and Dalmia, Siddharth and Shi, Jiatong and Peng, Yifan and Berrebbi, Dan and Wang, Xinyi and Neubig, Graham and Watanabe, Shinji",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.27/",
    doi = "10.18653/v1/2022.iwslt-1.27",
    pages = "298--307",
    abstract = "This paper describes CMU's submissions to the IWSLT 2022 dialect speech translation (ST) shared task for translating Tunisian-Arabic speech to English text. We use additional paired Modern Standard Arabic (MSA) data to directly improve the speech recognition (ASR) and machine translation (MT) components of our cascaded systems. We also augment the paired ASR data with pseudo translations via sequence-level knowledge distillation from an MT model and use these artificial triplet ST data to improve our end-to-end (E2E) systems. Our E2E models are based on the Multi-Decoder architecture with searchable hidden intermediates. We extend the Multi-Decoder by orienting the speech encoder towards the target language by applying ST supervision as a hierarchical connectionist temporal classification (CTC) multi-task objective. During inference, we apply joint decoding of the ST CTC and ST autoregressive decoder branches of our modified Multi-Decoder. Finally, we apply ROVER voting, posterior combination, and minimum Bayes-risk decoding with combined N-best lists to ensemble our various cascaded and E2E systems. Our best systems reached 20.8 and 19.5 BLEU on test2 (blind) and test1 respectively. Without any additional MSA data, we reached 20.4 and 19.2 on the same test sets.",
}
% __index_level_0__: 25483
@inproceedings{zanon-boito-etal-2022-trac,
    title = "{ON}-{TRAC} Consortium Systems for the {IWSLT} 2022 Dialect and Low-resource Speech Translation Tasks",
    author = "Zanon Boito, Marcely and Ortega, John and Riguidel, Hugo and Laurent, Antoine and Barrault, Lo{\"i}c and Bougares, Fethi and Chaabani, Firas and Nguyen, Ha and Barbier, Florentin and Gahbiche, Souhir and Est{\`e}ve, Yannick",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.28/",
    doi = "10.18653/v1/2022.iwslt-1.28",
    pages = "308--318",
    abstract = "This paper describes the ON-TRAC Consortium translation systems developed for two challenge tracks featured in the Evaluation Campaign of IWSLT 2022: low-resource and dialect speech translation. For the Tunisian Arabic-English dataset (low-resource and dialect tracks), we build an end-to-end model as our joint primary submission, and compare it against cascaded models that leverage a large fine-tuned wav2vec 2.0 model for ASR. Our results show that in our settings pipeline approaches are still very competitive, and that with the use of transfer learning, they can outperform end-to-end models for speech translation (ST). For the Tamasheq-French dataset (low-resource track) our primary submission leverages intermediate representations from a wav2vec 2.0 model trained on 234 hours of Tamasheq audio, while our contrastive model uses a French phonetic transcription of the Tamasheq audio as input in a Conformer speech translation architecture jointly trained on automatic speech recognition, ST and machine translation losses. Our results highlight that self-supervised models trained on smaller sets of target data are more effective for low-resource end-to-end ST fine-tuning than large off-the-shelf models. Results also illustrate that even approximate phonetic transcriptions can improve ST scores.",
}
% __index_level_0__: 25484
@inproceedings{yang-etal-2022-jhu,
    title = "{JHU} {IWSLT} 2022 Dialect Speech Translation System Description",
    author = "Yang, Jinyi and Hussein, Amir and Wiesner, Matthew and Khudanpur, Sanjeev",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.29/",
    doi = "10.18653/v1/2022.iwslt-1.29",
    pages = "319--326",
    abstract = "This paper details the Johns Hopkins speech translation (ST) system used in the IWSLT 2022 dialect speech translation task. Our system uses a cascade of automatic speech recognition (ASR) and machine translation (MT). We use a Conformer model for the ASR systems and a Transformer model for machine translation. Surprisingly, we found that while using additional ASR training data resulted in only a negligible change in performance as measured by BLEU or word error rate (WER), aggressive text normalization improved BLEU more significantly. We also describe an approach, similar to back-translation, for improving performance using synthetic dialectal source text produced from source sentences in mismatched dialects.",
}
% __index_level_0__: 25485
@inproceedings{rippeth-etal-2022-controlling,
    title = "Controlling Translation Formality Using Pre-trained Multilingual Language Models",
    author = "Rippeth, Elijah and Agrawal, Sweta and Carpuat, Marine",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.30/",
    doi = "10.18653/v1/2022.iwslt-1.30",
    pages = "327--340",
    abstract = "This paper describes the University of Maryland's submission to the Special Task on Formality Control for Spoken Language Translation at IWSLT, which evaluates translation from English into 6 languages with diverse grammatical formality markers. We investigate to what extent this problem can be addressed with a single multilingual model, simultaneously controlling its output for target language and formality. Results show that this strategy can approach the translation quality and formality control achieved by dedicated translation models. However, the nature of the underlying pre-trained language model and of the finetuning samples greatly impact results.",
}
% __index_level_0__: 25486
@inproceedings{vincent-etal-2022-controlling,
    title = "Controlling Formality in Low-Resource {NMT} with Domain Adaptation and Re-Ranking: {SLT}-{CDT}-{U}o{S} at {IWSLT}2022",
    author = "Vincent, Sebastian and Barrault, Lo{\"i}c and Scarton, Carolina",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.31/",
    doi = "10.18653/v1/2022.iwslt-1.31",
    pages = "341--350",
    abstract = "This paper describes the SLT-CDT-UoS group's submission to the first Special Task on Formality Control for Spoken Language Translation, part of the IWSLT 2022 Evaluation Campaign. Our efforts were split between two fronts: data engineering and altering the objective function for best hypothesis selection. We used language-independent methods to extract formal and informal sentence pairs from the provided corpora; using English as a pivot language, we propagated formality annotations to languages treated as zero-shot in the task; we also further improved formality controlling with a hypothesis re-ranking approach. On the test sets for English-to-German and English-to-Spanish, we achieved an average accuracy of .935 within the constrained setting and .995 within the unconstrained setting. In a zero-shot setting for English-to-Russian and English-to-Italian, we scored an average accuracy of .590 in the constrained setting and .659 in the unconstrained one.",
}
% __index_level_0__: 25487
@inproceedings{zhang-etal-2022-improving-machine,
    title = "Improving Machine Translation Formality Control with Weakly-Labelled Data Augmentation and Post Editing Strategies",
    author = "Zhang, Daniel and Yu, Jiang and Verma, Pragati and Ganesan, Ashwinkumar and Campbell, Sarah",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.32/",
    doi = "10.18653/v1/2022.iwslt-1.32",
    pages = "351--360",
    abstract = "This paper describes Amazon Alexa AI's implementation for the IWSLT 2022 shared task on formality control. We focus on the unconstrained and supervised task for the en{\textrightarrow}hi (Hindi) and en{\textrightarrow}ja (Japanese) pairs, where very limited formality annotated data is available. We propose three simple yet effective post-editing strategies, namely T-V conversion, utilizing a verb conjugator, and seq2seq models, in order to rewrite the translated phrases into formal or informal language. Considering nuances of formality and informality in different languages, our analysis shows that a language-specific post-editing strategy achieves the best performance. To address the unique challenge of limited formality annotations, we further develop a formality classifier to perform weakly labelled data augmentation, which automatically generates synthetic formality labels from a large parallel corpus. Empirical results on the IWSLT formality test set show that the proposed system achieved significant improvements in formality accuracy while retaining a BLEU score on par with the baseline.",
}
% __index_level_0__: 25488
@inproceedings{li-etal-2022-hw,
    title = "{HW}-{TSC}'s Participation in the {IWSLT} 2022 Isometric Spoken Language Translation",
    author = "Li, Zongyao and Guo, Jiaxin and Wei, Daimeng and Shang, Hengchao and Wang, Minghan and Zhu, Ting and Wu, Zhanglin and Yu, Zhengzhe and Chen, Xiaoyu and Lei, Lizhi and Yang, Hao and Qin, Ying",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.33/",
    doi = "10.18653/v1/2022.iwslt-1.33",
    pages = "361--368",
    abstract = "This paper presents our submissions to the IWSLT 2022 Isometric Spoken Language Translation task. We participate in all three language pairs (English-German, English-French, English-Spanish) under the constrained setting, and submit an English-German result under the unconstrained setting. We use the standard Transformer model as the baseline and obtain the best performance via one of its variants that shares the decoder input and output embedding. We perform detailed pre-processing and filtering on the provided bilingual data. Several strategies are used to train our models, such as Multilingual Translation, Back Translation, Forward Translation, R-Drop, Average Checkpoint, and Ensemble. We investigate three methods for biasing the output length: i) conditioning the output on a given target-source length-ratio class; ii) enriching the transformer positional embedding with length information; and iii) length-control decoding for non-autoregressive translation. Our submissions achieve 30.7, 41.6 and 36.7 BLEU respectively on the tst-COMMON test sets for the English-German, English-French and English-Spanish tasks, and 100{\%} comply with the length requirements.",
}
% __index_level_0__: 25489
@inproceedings{wilken-matusov-2022-appteks,
    title = "{A}pp{T}ek's Submission to the {IWSLT} 2022 Isometric Spoken Language Translation Task",
    author = "Wilken, Patrick and Matusov, Evgeny",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.34/",
    doi = "10.18653/v1/2022.iwslt-1.34",
    pages = "369--378",
    abstract = "To participate in the Isometric Spoken Language Translation Task of the IWSLT 2022 evaluation, constrained condition, AppTek developed neural Transformer-based systems for English-to-German with various mechanisms of length control, ranging from source-side and target-side pseudo-tokens to encoding of remaining length in characters that replaces positional encoding. We further increased translation length compliance by sentence-level selection of length-compliant hypotheses from different system variants, as well as rescoring of N-best candidates from a single system. Length-compliant back-translated and forward-translated synthetic data, as well as other parallel data variants derived from the original MuST-C training corpus were important for a good quality/desired length trade-off. Our experimental results show that length compliance levels above 90{\%} can be reached while minimizing losses in MT quality as measured in BERT and BLEU scores.",
}
% __index_level_0__: 25490
@inproceedings{bhatnagar-etal-2022-hierarchical,
    title = "Hierarchical Multi-task learning framework for Isometric-Speech Language Translation",
    author = "Bhatnagar, Aakash and Bhavsar, Nidhir and Singh, Muskaan and Motlicek, Petr",
    editor = "Salesky, Elizabeth and Federico, Marcello and Costa-juss{\`a}, Marta",
    booktitle = "Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.iwslt-1.35/",
    doi = "10.18653/v1/2022.iwslt-1.35",
    pages = "379--385",
    abstract = "This paper presents our submission to the shared task on isometric neural machine translation at the International Conference on Spoken Language Translation (IWSLT). There are numerous state-of-the-art models for translation problems. However, these models lack any length constraint to produce short or long outputs from the source text. In this paper, we propose a hierarchical approach to generate isometric translations on the MuST-C dataset, achieving a BERTscore of 0.85, a length ratio of 1.087, a BLEU score of 42.3, and a length range of 51.03{\%}. On the blind dataset provided by the task organizers, we obtain a BERTscore of 0.80, a length ratio of 1.10 and a length range of 47.5{\%}. We have made our code public at \url{https://github.com/aakash0017/Machine-Translation-ISWLT}.",
}
% __index_level_0__: 25491
@inproceedings{manzoor-petukhova-2022-going,
    title = "What Is Going through Your Mind? Metacognitive Events Classification in Human-Agent Interactions",
    author = "Manzoor, Hafiza Erum and Petukhova, Volha",
    editor = "Bunt, Harry",
    booktitle = "Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.isa-1.1/",
    pages = "1--9",
    abstract = "For an agent, either human or artificial, to show intelligent interactive behaviour implies assessments of the reliability of own and others' thoughts, feelings and beliefs. Agents capable of these robust evaluations are able to adequately interpret their own and others' cognitive and emotional processes, anticipate future actions, and improve their decision-making and interactive performances across domains and contexts. Reliable instruments to assess interlocutors' mindful capacities for monitoring and regulation - metacognition - in human-agent interaction in real-time and continuously are of crucial importance, however challenging to design. The presented study reports Concurrent Think Aloud (CTA) experiments in order to access and evaluate metacognitive dispositions and attitudes of participants in human-agent interactions. A typology of metacognitive events related to the {\textquoteleft}verbalized' monitoring, interpretation, reflection and regulation activities observed in a multimodal dialogue has been designed, and serves as a valid tool to identify relations between participants' behaviour, analysed in terms of ISO 24617-2 compliant dialogue acts, and the corresponding metacognitive indicators.",
}
% __index_level_0__: 25493
@inproceedings{stock-etal-2022-assessment,
    title = "Assessment of Sales Negotiation Strategies with {ISO} 24617-2 Dialogue Act Annotations",
    author = "Stock, Jutta and Petukhova, Volha and Klakow, Dietrich",
    editor = "Bunt, Harry",
    booktitle = "Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.isa-1.2/",
    pages = "10--19",
    abstract = "Call centres endeavour to achieve the highest possible level of transparency with regard to the factors influencing sales success. Existing approaches to the quality assessment of customer-agent sales negotiations do not enable in-depth analysis of sales behaviour. This study addresses this gap and presents a conceptual and operational framework applying the ISO 24617-2 dialogue act annotation scheme, a multidimensional taxonomy of interoperable semantic concepts. We hypothesise that the ISO 24617-2 dialogue act annotation framework adequately supports sales negotiation assessment in the domain of call centre conversations. Authentic call centre conversations are annotated, and a range of extensions/modifications is proposed to make the annotation scheme better fit this new domain. We conclude that ISO 24617-2 serves as a powerful instrument for the analysis and assessment of sales negotiations and the strategies applied by a call centre agent.",
}
% __index_level_0__: 25494
@inproceedings{stranisci-etal-2022-guidelines,
    title = "Guidelines and a Corpus for Extracting Biographical Events",
    author = "Stranisci, Marco Antonio and Mensa, Enrico and Damiano, Rossana and Radicioni, Daniele and Diakite, Ousmane",
    editor = "Bunt, Harry",
    booktitle = "Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.isa-1.3/",
    pages = "20--26",
    abstract = "Although biographies are widely spread within the Semantic Web, resources and approaches to automatically extract biographical events are limited. Such limitation reduces the amount of structured, machine-readable biographical information, especially about people belonging to underrepresented groups. Our work challenges this limitation by providing a set of guidelines for the semantic annotation of life events. The guidelines are designed to be interoperable with existing ISO standards for semantic annotation: ISO-TimeML (ISO-24617-1) and SemAF (ISO-24617-4). The guidelines were tested through an annotation task of Wikipedia biographies of underrepresented writers, namely authors born in non-Western countries, migrants, or belonging to ethnic minorities. 1,000 sentences were annotated by 4 annotators with an average inter-annotator agreement of 0.825. The resulting corpus was mapped onto OntoNotes. Such mapping allowed us to expand our corpus, showing that already existing resources may be exploited for the biographical event extraction task.",
}
% __index_level_0__: 25495
@inproceedings{barth-etal-2022-levels,
    title = "Levels of Non-Fictionality in Fictional Texts",
    author = "Barth, Florian and Varachkina, Hanna and D{\"o}nicke, Tillmann and G{\"o}deke, Luisa",
    editor = "Bunt, Harry",
    booktitle = "Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.isa-1.4/",
    pages = "27--32",
    abstract = "The annotation and automatic recognition of non-fictional discourse within a text is an important, yet unresolved task in literary research. While non-fictional passages can consist of several clauses or sentences, we argue that 1) an entity-level classification of fictionality and 2) the linking of Wikidata identifiers can be used to automatically identify (non-)fictional discourse. We query Wikidata and DBpedia for relevant information about a requested entity as well as the corresponding literary text to determine the entity's fictionality status and assign a Wikidata identifier, if unequivocally possible. We evaluate our methods on an exemplary text from our diachronic literary corpus, where our methods classify 97{\%} of persons and 62{\%} of locations correctly as fictional or real. Furthermore, 75{\%} of the resolved persons and 43{\%} of the resolved locations are resolved correctly. In a quantitative experiment, we apply the entity-level fictionality tagger to our corpus and conclude that more non-fictional passages can be identified when information about real entities is available.",
}
% __index_level_0__: 25496
@inproceedings{cavar-etal-2022-event,
    title = "Event Sequencing Annotation with {TIE}-{ML}",
    author = "Cavar, Damir and Aljubailan, Ali and Mompelat, Ludovic and Won, Yuna and Dickson, Billy and Fort, Matthew and Davis, Andrew and Kim, Soyoung",
    editor = "Bunt, Harry",
    booktitle = "Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.isa-1.5/",
    pages = "33--41",
    abstract = "TIE-ML (Temporal Information Event Markup Language), first proposed by Cavar et al. (2021), provides a radically simplified temporal annotation schema for event sequencing and clause-level temporal properties even in complex sentences. TIE-ML facilitates rapid annotation of essential tense features at the clause level by labeling simple or periphrastic tense properties, as well as scope relations between clauses, and temporal interpretation at the sentence level. This paper presents the first annotation samples and empirical results. The application of the TIE-ML strategy on the sentences in the Penn Treebank (Marcus et al., 1993) and other non-English language data is discussed in detail. The motivation, insights, and future directions for TIE-ML are discussed, too. The aim is to develop a more efficient annotation strategy and a formalism for clause-level tense and aspect labeling, event sequencing, and tense scope relations that boosts the productivity of tense and event-level corpus annotation. The central goal is to facilitate the production of large data sets for machine learning and quantitative linguistic studies of intra- and cross-linguistic semantic properties of temporal and event logic.",
}
% __index_level_0__: 25497
@inproceedings{delmonte-busetto-2022-measuring,
    title = "Measuring Similarity by Linguistic Features rather than Frequency",
    author = "Delmonte, Rodolfo and Busetto, Nicol{\`o}",
    editor = "Bunt, Harry",
    booktitle = "Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.isa-1.6/",
    pages = "42--52",
    abstract = "In the use and creation of current deep learning models, the only number used in the overall computation is the frequency value associated with the current word form in the corpus, which is used to substitute it. Frequency values come in two forms: absolute and relative. Absolute frequency is used indirectly when selecting the vocabulary against which the word embeddings are created: the cutoff threshold is usually fixed at the 30/50K most frequent words. Relative frequency comes in directly when computing word embeddings based on co-occurrence values of the tokens included in a window of 2/5 adjacent tokens. The latter values are then used to compute similarity, mostly based on cosine distance. In this paper we evaluate the impact of these two frequency parameters on a small corpus of Italian sentences with two main features: the presence of very rare words and of non-canonical structures. Rather than basing our evaluation on the cosine measure alone, we propose a graded scale of scores which are linguistically motivated. The results, computed on the basis of a perusal of BERT's raw embeddings, show that the two parameters conspire to decide the level of predictability.",
}
% __index_level_0__: 25498
@inproceedings{dong-etal-2022-testing,
    title = "Testing the Annotation Consistency of Hallidayan Transitivity Processes: A Multi-variable Structural Approach",
    author = "Dong, Min and Liu, Xiaoyan and Fang, Alex Chengyu",
    editor = "Bunt, Harry",
    booktitle = "Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.isa-1.7/",
    pages = "53--60",
    abstract = "SFL seeks to explain identifiable, observable phenomena of language use in context through the application of a theoretical framework which models language as a functional, meaning-making system (Halliday {\&} Matthiessen 2004). Due to the lack of explicit annotation criteria and the divide between conceptual and syntactic criteria in practice, it has been difficult to achieve consistency in the annotation of Hallidayan transitivity processes. The present study proposed that explicit structural and syntactic criteria should be adopted as a basis. Drawing on syntactic and grammatical features as judgement cues, we applied structurally oriented criteria for the annotation of the process categories and participant roles, combining a set of interrelated syntactic variables, and established the annotation criteria for contextualised circumstantial categories in structural as well as semantic terms. An experiment was carried out to test the usefulness of these annotation criteria, applying percent agreement and Cohen's kappa as measurements of interrater reliability between the two annotators in each of the five pairs. The results verified our assumptions, albeit rather mildly, and, more significantly, offered some first empirical indications about the practical consistency of transitivity analysis in SFL. In future work, the research team expects to draw on the insights and experience from some of the ISO standards devoted to semantic annotation, such as dialogue acts (Bunt et al. 2012) and semantic roles (ISO-24617-4, 2014).",
}
% __index_level_0__: 25499
inproceedings
leal-etal-2022-place
The place of {ISO}-Space in {T}ext2{S}tory multilayer annotation scheme
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.8/
Leal, Ant{\'o}nio and Silvano, Purifica{\c{c}}{\~a}o and Amorim, Evelin and Cantante, In{\^e}s and Silva, F{\'a}tima and Mario Jorge, Al{\'i}pio and Campos, Ricardo
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
61--70
Reasoning about spatial information is fundamental in natural language to fully understand relationships between entities and/or between events. However, the complexity underlying such reasoning makes it hard to formally represent spatial information. Despite the growing interest in this topic, and the development of some frameworks, many problems persist regarding, for instance, the coverage of a wide variety of linguistic constructions and of languages. In this paper, we present a proposal for integrating ISO-Space into an ISO-based multilayer annotation scheme, designed to annotate news in European Portuguese. This scheme already enables annotation at three levels, temporal, referential and thematic, by combining postulates from ISO 24617-1, 4 and 9. Since the corpus comprises news articles, and spatial information is relevant within this kind of text, a more detailed account of space was required. The main objective of this paper is to discuss the process of integrating ISO-Space with the existing layers of our annotation scheme, assessing the compatibility of the aforementioned parts of ISO 24617, and the problems posed by the harmonization of the four layers and by some specifications of ISO-Space.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,500
inproceedings
lindahl-2022-machines
Do machines dream of artificial agreement?
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.9/
Lindahl, Anna
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
71--75
In this paper the (assumed) inconsistency between F1-scores and annotator agreement measures is discussed. This is exemplified in five corpora from the field of argumentation mining. High agreement is important in most annotation tasks and is also often deemed important for an annotated dataset to be useful for machine learning. However, depending on the annotation task, achieving high agreement is not always easy. This is especially true in the field of argumentation mining, because argumentation can be complex as well as implicit. There are also many different models of argumentation, which can be seen in the increasing number of argumentation-annotated corpora. Many of these reach only moderate agreement but are still used in machine learning tasks, reaching high F1-scores. In this paper we describe five corpora, in particular how they have been created and used, to see how they have handled disagreement. We find that agreement can be raised post-production, but that more discussion regarding evaluating and calculating agreement is needed. We conclude that standardisation of the models and the evaluation methods could help such discussions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,501
inproceedings
marini-2022-croatpas
{C}roa{TPAS}: A Survey-based Evaluation
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.10/
Marini, Costanza
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
76--80
The Croatian Typed Predicate Argument Structures resource is a Croatian/English bilingual digital dictionary of corpus-derived verb valency structures, whose argument slots have been annotated with Semantic Type labels following the CPA methodology. CroaTPAS is tailor-made to represent verb polysemy and currently contains 180 Croatian verbs for a total of 683 different verb senses. In order to evaluate the resource both in terms of the identified Croatian verb senses and of the English descriptions explaining them, an online survey based on a multiple-choice sense disambiguation task was devised, pilot tested and distributed among respondents following a snowball sampling methodology. Answers from 30 respondents were collected and compared against a yardstick set of answers in line with CroaTPAS's sense distinctions. The Jaccard similarity index was used as a measure of agreement. Since the multiple-choice items respondents answered were based on a representative selection of CroaTPAS verbs, they allowed for a generalization of the results to the whole of the resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,502
inproceedings
meron-2022-simplifying
Simplifying Semantic Annotations of {SMC}al{F}low
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.11/
Meron, Joram
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
81--85
SMCalFlow (Semantic Machines et al., 2020) is a large corpus of semantically detailed annotations of task-oriented natural dialogues. The annotations use a dataflow approach, in which the annotations are programs that represent user requests. Despite the availability, size and richness of this annotated corpus, it has seen only very limited use in dialogue systems research, at least in part due to the difficulty of understanding and using the annotations. To address these difficulties, this paper suggests a simplification of the SMCalFlow annotations and releases the code needed to inspect the execution of the annotated dataflow programs, which should give researchers of dialogue systems an easy entry point to experiment with various dataflow-based implementations and annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,503
inproceedings
sio-morgado-da-costa-2022-multilingual
Multilingual Reference Annotation: A Case between {E}nglish and {M}andarin {C}hinese
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.12/
Sio, Ut Seong and Morgado da Costa, Lu{\'i}s
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
86--94
This paper presents the on-going effort to annotate a cross-lingual corpus on nominal referring expressions in English and Mandarin Chinese. The annotation includes referential forms and referential (information) statuses. We adopt the RefLex annotation scheme (Baumann and Riester, 2012) for the classification of referential statuses. The data focus of this paper is restricted to [the-X] phrases in English (where X stands for any nominal) and their translation equivalents in Mandarin Chinese. The original English and translated Mandarin versions of {\textquoteleft}The Adventure of the Dancing Men' and {\textquoteleft}The Adventure of the Speckled Band' from the Sherlock Holmes series were annotated. The corpus contains 1090 instances of [the-X] phrases in English. Our study uncovers the following: (i) bare nouns are the most common Mandarin translation for [the-X] phrases in English, followed by demonstrative phrases, with the exception that when the noun phrase refers to locations/places, demonstrative phrases are almost never used; (ii) [the-X] phrases in English are more likely to be translated as demonstrative phrases in Mandarin if they have the referential status of {\textquoteleft}given' (previously mentioned) or {\textquoteleft}given-displaced' (the antecedent of the expression occurs earlier than the previous five clauses). In these Mandarin demonstrative phrases, the proximal demonstrative is more often used, and it is almost exclusively used for {\textquoteleft}given', while the distal demonstrative can be used for both {\textquoteleft}given' and {\textquoteleft}given-displaced'.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,504
inproceedings
amblard-etal-2022-graph
Graph Querying for Semantic Annotations
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.13/
Amblard, Maxime and Guillaume, Bruno and Pavlova, Siyana and Perrier, Guy
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
95--101
This paper presents how the online tool Grew-match can be used to query and visualise data from existing semantically annotated corpora. A dedicated syntax is available to construct simple to complex queries and execute them against a corpus. Such queries give transverse views of the annotated data; these views can help check the consistency of annotations within one corpus or across several corpora. Grew-match can thus be seen as an error mining tool: when inconsistencies are detected, it helps find the sentences that should be fixed. Finally, Grew-match can also be used as a side tool to assist annotation tasks, helping to find annotation examples in existing corpora to compare with the data to be annotated.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,505
inproceedings
bunt-2022-intuitive
Intuitive and Formal Transparency in Annotation Schemes
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.14/
Bunt, Harry
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
102--109
This paper explores the application of the notion of {\textquoteleft}transparency' to annotation schemes, understood as the properties that make it easy for potential users to see the scope of the scheme, the main concepts used in annotations, and the ways these concepts are interrelated. Based on an analysis of annotation schemes in the ISO Semantic Annotation Framework, it is argued that the way these schemes make use of {\textquoteleft}metamodels' is not optimal, since these models are often not entirely clear and not directly related to the formal specification of the scheme. It is shown that by formalizing the relation between metamodels and annotations, both can benefit and can be made simpler, and the annotation scheme becomes intuitively more transparent.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,506
inproceedings
pavlova-etal-2022-much
How much of {UCCA} can be predicted from {AMR}?
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.15/
Pavlova, Siyana and Amblard, Maxime and Guillaume, Bruno
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
110--117
In this paper, we consider two of the currently popular semantic frameworks: Abstract Meaning Representation (AMR) - a more abstract framework, and Universal Conceptual Cognitive Annotation (UCCA) - an anchored framework. We use a corpus-based approach to build two graph rewriting systems, a deterministic and a non-deterministic one, from the former to the latter framework. We present their evaluation and a number of ambiguities that we discovered while building our rules. Finally, we provide a discussion and some future work directions in relation to comparing semantic frameworks of different flavors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,507
inproceedings
moreno-schneider-etal-2022-towards
Towards Practical Semantic Interoperability in {NLP} Platforms
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.16/
Moreno-Schneider, Julian and Calizzano, R{\'e}mi and Kintzel, Florian and Rehm, Georg and Galanis, Dimitris and Roberts, Ian
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
118--126
Interoperability is a necessity for the resolution of complex tasks that require the interconnection of several NLP services. This article presents the approaches that were adopted in three scenarios to address the respective interoperability issues. The first scenario describes the creation of a common REST API for a specific platform, the second scenario presents the interconnection of several platforms via mapping of different representation formats and the third scenario shows the complexities of interoperability through semantic schema mapping or automatic translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,508
inproceedings
koyano-etal-2022-annotating
Annotating {J}apanese Numeral Expressions for a Logical and Pragmatic Inference Dataset
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.17/
Koyano, Kana and Yanaka, Hitomi and Mineshima, Koji and Bekki, Daisuke
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
127--132
Numeral expressions in Japanese are characterized by the flexibility of quantifier positions and the variety of numeral suffixes. However, little work has been done to build annotated corpora focusing on these features and datasets for testing the understanding of Japanese numeral expressions. In this study, we build a corpus that annotates each numeral expression in an existing phrase structure-based Japanese treebank with its usage and numeral suffix types. We also construct an inference test set for numerical expressions based on this annotated corpus. In this test set, we particularly pay attention to inferences where the correct label differs between logical entailment and implicature, and to contexts, such as negations and conditionals, where the entailment labels can be reversed. The baseline experiment with Japanese BERT models shows that our inference test set poses challenges for inference involving various types of numeral expressions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,509
inproceedings
varvara-etal-2022-annotating
Annotating complex words to investigate the semantics of derivational processes
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.18/
Varvara, Rossella and Salvadori, Justine and Huyghe, Richard
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
133--141
In this paper, we present and test an annotation scheme designed to analyse the semantic properties of derived nouns in context. Aiming at a general semantic comparison of morphological processes, we use a descriptive model that seeks to capture semantic regularities among lexemes and affixes, rather than match occurrences to word sense inventories. We annotate two distinct features of target words: the ontological type of the entity they denote and their semantic relationship with the word they derive from. As illustrated through an annotation experiment on French corpus data, this procedure allows us to highlight semantic differences and similarities between affixes by investigating the number and frequency of their semantic functions, as well as the relation between affix polyfunctionality and lexical ambiguity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,510
inproceedings
ricchiardi-jezek-2022-annotating
Annotating Propositional Attitude Verbs and their Arguments
Bunt, Harry
jun
2022
Marseille, France
European Language Resources Association
https://aclanthology.org/2022.isa-1.19/
Ricchiardi, Marta and Jezek, Elisabetta
Proceedings of the 18th Joint ACL - ISO Workshop on Interoperable Semantic Annotation within LREC2022
142--149
This paper describes the results of an empirical study on attitude verbs and propositional attitude reports in Italian. Within the framework of a project aiming at acquiring argument structures for Italian verbs from corpora, we carried out a systematic annotation that aims at identifying which verbs are actually attitude verbs in Italian. The result is a list of 179 argument structures based on corpus-derived patterns of use for 126 verbs that behave as attitude verbs. The distribution of these verbs in the corpus suggests that not only the canonical that-clauses, i.e. subordinates introduced by the complementizer che, but also direct speech, infinitives introduced by the complementizer di, and some nominals are good candidates to express propositional contents in propositional attitude reports. The annotation also highlights some issues between semantics and ontology, concerning the relation between events and propositions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,511
inproceedings
ding-etal-2022-isotropy
On Isotropy Calibration of Transformer Models
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.1/
Ding, Yue and Martinkus, Karolis and Pascual, Damian and Clematide, Simon and Wattenhofer, Roger
Proceedings of the Third Workshop on Insights from Negative Results in NLP
1--9
Different studies of the embedding space of transformer models suggest that the distribution of contextual representations is highly anisotropic - the embeddings are distributed in a narrow cone. Meanwhile, static word representations (e.g., Word2Vec or GloVe) have been shown to benefit from isotropic spaces. Therefore, previous work has developed methods to calibrate the embedding space of transformers in order to ensure isotropy. However, a recent study (Cai et al. 2021) shows that the embedding space of transformers is locally isotropic, which suggests that these models are already capable of exploiting the expressive capacity of their embedding space. In this work, we conduct an empirical evaluation of state-of-the-art methods for isotropy calibration on transformers and find that they do not provide consistent improvements across models and tasks. These results support the thesis that, given the local isotropy, transformers do not benefit from additional isotropy calibration.
null
null
10.18653/v1/2022.insights-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,513
inproceedings
cignarella-etal-2022-dependency
Do Dependency Relations Help in the Task of Stance Detection?
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.2/
Cignarella, Alessandra Teresa and Bosco, Cristina and Rosso, Paolo
Proceedings of the Third Workshop on Insights from Negative Results in NLP
10--17
In this paper we present a set of multilingual experiments tackling the task of Stance Detection in five different languages: English, Spanish, Catalan, French and Italian. Furthermore, we study the phenomenon of stance with respect to six different targets {--} one per language, and two for Italian {--} employing a variety of machine learning algorithms that primarily exploit morphological and syntactic knowledge as features, represented through the format of Universal Dependencies. Results seem to suggest that the methodology employed is not beneficial per se, but that it might be useful to exploit the same features with a different methodology.
null
null
10.18653/v1/2022.insights-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,514
inproceedings
khosla-gangadharaiah-2022-evaluating
Evaluating the Practical Utility of Confidence-score based Techniques for Unsupervised Open-world Classification
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.3/
Khosla, Sopan and Gangadharaiah, Rashmi
Proceedings of the Third Workshop on Insights from Negative Results in NLP
18--23
Open-world classification in dialog systems requires models to detect open intents, while ensuring the quality of in-domain (ID) intent classification. In this work, we revisit methods that leverage distance-based statistics for unsupervised out-of-domain (OOD) detection. We show that despite their superior performance on threshold-independent metrics like AUROC on the test set, threshold values chosen based on performance on a validation set do not generalize well to the test set, thus resulting in substantially lower ID or OOD detection accuracy and F1-scores. Our analysis shows that this lack of generalizability can be successfully mitigated by setting aside a hold-out set from the validation data for threshold selection (sometimes achieving relative gains as high as 100{\%}). Extensive experiments on seven benchmark datasets show that this fix puts the performance of these methods at par with, or sometimes even better than, the current state-of-the-art OOD detection techniques.
null
null
10.18653/v1/2022.insights-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,515
inproceedings
lyu-etal-2022-extending
Extending the Scope of Out-of-Domain: Examining {QA} models in multiple subdomains
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.4/
Lyu, Chenyang and Foster, Jennifer and Graham, Yvette
Proceedings of the Third Workshop on Insights from Negative Results in NLP
24--37
Past work that investigates out-of-domain performance of QA systems has mainly focused on general domains (e.g. the news domain, the Wikipedia domain), underestimating the importance of subdomains defined by the internal characteristics of QA datasets. In this paper, we extend the scope of {\textquotedblleft}out-of-domain{\textquotedblright} by splitting QA examples into different subdomains according to their internal characteristics, including question type, text length, and answer position. We then examine the performance of QA systems trained on the data from different subdomains. Experimental results show that the performance of QA systems can be significantly reduced when the train data and test data come from different subdomains. These results question the generalizability of current QA systems in multiple subdomains, suggesting the need to combat the bias introduced by the internal characteristics of QA datasets.
null
null
10.18653/v1/2022.insights-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,516
inproceedings
shaham-levy-2022-get
What Do You Get When You Cross Beam Search with Nucleus Sampling?
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.5/
Shaham, Uri and Levy, Omer
Proceedings of the Third Workshop on Insights from Negative Results in NLP
38--45
We combine beam search with the probabilistic pruning technique of nucleus sampling to create two deterministic nucleus search algorithms for natural language generation. The first algorithm, p-exact search, locally prunes the next-token distribution and performs an exact search over the remaining space. The second algorithm, dynamic beam search, shrinks and expands the beam size according to the entropy of the candidate's probability distribution. Despite the probabilistic intuition behind nucleus search, experiments on machine translation and summarization benchmarks show that both algorithms reach the same performance levels as standard beam search.
null
null
10.18653/v1/2022.insights-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,517
inproceedings
sun-etal-2022-much
How Much Do Modifications to Transformer Language Models Affect Their Ability to Learn Linguistic Knowledge?
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.6/
Sun, Simeng and Dillon, Brian and Iyyer, Mohit
Proceedings of the Third Workshop on Insights from Negative Results in NLP
46--53
Recent progress in large pretrained language models (LMs) has led to a growth of analyses examining what kinds of linguistic knowledge are encoded by these models. Due to computational constraints, existing analyses are mostly conducted on publicly-released LM checkpoints, which makes it difficult to study how various factors during \textit{training} affect the models' acquisition of linguistic knowledge. In this paper, we train a suite of small-scale Transformer LMs that differ from each other with respect to architectural decisions (e.g., self-attention configuration) or training objectives (e.g., multi-tasking, focal loss). We evaluate these LMs on BLiMP, a targeted evaluation benchmark of multiple English linguistic phenomena. Our experiments show that while none of these modifications yields significant improvements on aggregate, changes to the loss function result in promising improvements on several subcategories (e.g., detecting adjunct islands, correctly scoping negative polarity items). We hope our work offers useful insights for future research into designing Transformer LMs that more effectively learn linguistic knowledge.
null
null
10.18653/v1/2022.insights-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,518
inproceedings
munoz-ortiz-etal-2022-cross
Cross-lingual Inflection as a Data Augmentation Method for Parsing
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.7/
Mu{\~n}oz-Ortiz, Alberto and G{\'o}mez-Rodr{\'i}guez, Carlos and Vilares, David
Proceedings of the Third Workshop on Insights from Negative Results in NLP
54--61
We propose a morphology-based method for low-resource (LR) dependency parsing. We train a morphological inflector for target LR languages, and apply it to related rich-resource (RR) treebanks to create cross-lingual (x-inflected) treebanks that resemble the target LR language. We use such inflected treebanks to train parsers in zero- (training on x-inflected treebanks) and few-shot (training on x-inflected and target language treebanks) setups. The results show that the method sometimes improves the baselines, but not consistently.
null
null
10.18653/v1/2022.insights-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,519
inproceedings
zhu-etal-2022-bert
Is {BERT} Robust to Label Noise? A Study on Learning with Noisy Labels in Text Classification
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.8/
Zhu, Dawei and Hedderich, Michael A. and Zhai, Fangzhou and Adelani, David Ifeoluwa and Klakow, Dietrich
Proceedings of the Third Workshop on Insights from Negative Results in NLP
62--67
Incorrect labels in training data occur when human annotators make mistakes or when the data is generated via weak or distant supervision. It has been shown that complex noise-handling techniques - by modeling, cleaning or filtering the noisy instances - are required to prevent models from fitting this label noise. However, we show in this work that, for text classification tasks with modern NLP models like BERT, over a variety of noise types, existing noise-handling methods do not always improve performance, and may even degrade it, suggesting the need for further investigation. We also back our observations with a comprehensive analysis.
null
null
10.18653/v1/2022.insights-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,520
inproceedings
lent-etal-2022-ancestor
Ancestor-to-Creole Transfer is Not a Walk in the Park
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.9/
Lent, Heather and Bugliarello, Emanuele and S{\o}gaard, Anders
Proceedings of the Third Workshop on Insights from Negative Results in NLP
68--74
We aim to learn language models for Creole languages for which large volumes of data are not readily available, and therefore explore the potential transfer from ancestor languages (the {\textquoteleft}Ancestry Transfer Hypothesis'). We find that standard transfer methods do not facilitate ancestry transfer. Surprisingly, and unlike for other non-Creole languages, a very distinct two-phase pattern emerges for Creoles: as our training losses plateau and language models begin to overfit on their source languages, perplexity on the Creoles drops. We explore whether this compression phase can lead to practically useful language models (the {\textquoteleft}Ancestry Bottleneck Hypothesis'), but falsify this as well. Moreover, we show that Creoles exhibit this two-phase pattern even when training on random, unrelated languages. Creoles thus seem to be typological outliers, and we speculate whether there is a link between the two observations.
null
null
10.18653/v1/2022.insights-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,521
inproceedings
yang-etal-2022-gpt
What {GPT} Knows About Who is Who
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.10/
Yang, Xiaohan and Peynetti, Eduardo and Meerman, Vasco and Tanner, Chris
Proceedings of the Third Workshop on Insights from Negative Results in NLP
75--81
Coreference resolution {--} which is a crucial task for understanding discourse and language at large {--} has yet to witness widespread benefits from large language models (LLMs). Moreover, coreference resolution systems largely rely on supervised labels, which are highly expensive and difficult to annotate, thus making it ripe for prompt engineering. In this paper, we introduce a QA-based prompt-engineering method and discern \textit{generative}, pre-trained LLMs' abilities and limitations toward the task of coreference resolution. Our experiments show that GPT-2 and GPT-Neo can return valid answers, but that their capabilities to identify coreferent mentions are limited and prompt-sensitive, leading to inconsistent results.
null
null
10.18653/v1/2022.insights-1.10
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,522
inproceedings
bajaj-etal-2022-evaluating
Evaluating Biomedical Word Embeddings for Vocabulary Alignment at Scale in the {UMLS} {M}etathesaurus Using {S}iamese Networks
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.11/
Bajaj, Goonmeet and Nguyen, Vinh and Wijesiriwardene, Thilini and Yip, Hong Yung and Javangula, Vishesh and Sheth, Amit and Parthasarathy, Srinivasan and Bodenreider, Olivier
Proceedings of the Third Workshop on Insights from Negative Results in NLP
82--87
Recent work uses a Siamese Network, initialized with BioWordVec embeddings (distributed word embeddings), for predicting synonymy among biomedical terms to automate a part of the UMLS (Unified Medical Language System) Metathesaurus construction process. We evaluate the use of contextualized word embeddings extracted from nine different biomedical BERT-based models for synonym prediction in the UMLS by replacing BioWordVec embeddings with embeddings extracted from each biomedical BERT model using different feature extraction methods. Finally, we conduct a thorough grid search, which prior work lacks, to find the best set of hyperparameters. Surprisingly, we find that Siamese Networks initialized with BioWordVec embeddings still outperform the Siamese Networks initialized with embeddings extracted from the biomedical BERT models.
null
null
10.18653/v1/2022.insights-1.11
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,523
inproceedings
okimura-etal-2022-impact
On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.12/
Okimura, Itsuki and Reid, Machel and Kawano, Makoto and Matsuo, Yutaka
Proceedings of the Third Workshop on Insights from Negative Results in NLP
88--93
Within the broader scope of machine learning, data augmentation is a common strategy to improve the generalization and robustness of machine learning models. While data augmentation has been widely used within computer vision, its use in NLP has been comparatively limited. The reason for this is that within NLP, the impact of proposed data augmentation methods on performance has not been evaluated in a unified manner, and effective data augmentation methods remain unclear. In this paper, we look to tackle this by evaluating the impact of 12 data augmentation methods on multiple datasets when finetuning pre-trained language models. We find minimal improvements when data sizes are constrained to a few thousand, with performance degradation when data size is increased. We also use various methods to quantify the strength of data augmentations, and find that these values, though weakly correlated with downstream performance, correlate negatively or positively depending on the task. Furthermore, we find a glaring lack of consistently performant data augmentations. This all alludes to the difficulty of data augmentation for NLP tasks, and we are inclined to believe that static data augmentations are not broadly applicable given these properties.
null
null
10.18653/v1/2022.insights-1.12
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,524
inproceedings
ishii-etal-2022-question
Can Question Rewriting Help Conversational Question Answering?
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.13/
Ishii, Etsuko and Xu, Yan and Cahyawijaya, Samuel and Wilie, Bryan
Proceedings of the Third Workshop on Insights from Negative Results in NLP
94--99
Question rewriting (QR) is a subtask of conversational question answering (CQA) aiming to ease the challenges of understanding dependencies among dialogue history by reformulating questions in a self-contained form. Despite seeming plausible, little evidence is available to justify QR as a mitigation method for CQA. To verify the effectiveness of QR in CQA, we investigate a reinforcement learning approach that integrates QR and CQA tasks and does not require corresponding QR datasets for targeted CQA. We find, however, that the RL method is on par with the end-to-end baseline. We provide an analysis of the failure and describe the difficulty of exploiting QR for CQA.
null
null
10.18653/v1/2022.insights-1.13
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,525
inproceedings
rodriguez-etal-2022-clustering
Clustering Examples in Multi-Dataset Benchmarks with Item Response Theory
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.14/
Rodriguez, Pedro and Htut, Phu Mon and Lalor, John and Sedoc, Jo{\~a}o
Proceedings of the Third Workshop on Insights from Negative Results in NLP
100--112
In natural language processing, multi-dataset benchmarks for common tasks (e.g., SuperGLUE for natural language inference and MRQA for question answering) have risen in importance. Invariably, tasks and individual examples vary in difficulty. Recent analysis methods infer properties of examples such as difficulty. In particular, Item Response Theory (IRT) jointly infers example and model properties from the output of benchmark tasks (i.e., scores for each model-example pair). Therefore, it seems sensible that methods like IRT should be able to detect differences between datasets in a task. This work shows that current IRT models are not as good at identifying differences as we would expect, explains why this is difficult, and outlines future directions that incorporate more (textual) signal from examples.
null
null
10.18653/v1/2022.insights-1.14
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,526
inproceedings
kim-etal-2022-limits
On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.15/
Kim, Hyounghun and Padmakumar, Aishwarya and Jin, Di and Bansal, Mohit and Hakkani-Tur, Dilek
Proceedings of the Third Workshop on Insights from Negative Results in NLP
113--118
Natural language guided embodied task completion is a challenging problem since it requires understanding natural language instructions, aligning them with egocentric visual observations, and choosing appropriate actions to execute in the environment to produce desired changes. We experiment with augmenting a transformer model for this task with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or manipulation action. We observed that the proposed modules resulted in improved, and in fact state-of-the-art, performance on an unseen validation set of a popular benchmark dataset, ALFRED. However, our best model selected using the unseen validation set underperforms on the unseen test split of ALFRED, indicating that performance on the unseen validation set may not in itself be a sufficient indicator of whether model improvements generalize to unseen test sets. We highlight this result as we believe it may reflect a wider phenomenon in machine learning tasks, one primarily noticeable in benchmarks that limit evaluations on test splits, and it highlights the need to modify benchmark design to better account for variance in model performance.
null
null
10.18653/v1/2022.insights-1.15
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,527
inproceedings
surkov-etal-2022-data
Do Data-based Curricula Work?
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.16/
Surkov, Maxim and Mosin, Vladislav and Yamshchikov, Ivan P.
Proceedings of the Third Workshop on Insights from Negative Results in NLP
119--128
Current state-of-the-art NLP systems use large neural networks that require extensive computational resources for training. Inspired by human knowledge acquisition, researchers have proposed curriculum learning - sequencing tasks (task-based curricula) or ordering and sampling the datasets (data-based curricula) that facilitate training. This work investigates the benefits of data-based curriculum learning for large language models such as BERT and T5. We experiment with various curricula based on complexity measures and different sampling strategies. Extensive experiments on several NLP tasks show that curricula based on various complexity measures rarely have any benefits, while random sampling performs either as well or better than curricula.
null
null
10.18653/v1/2022.insights-1.16
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,528
inproceedings
bingyu-arefyev-2022-document
The Document Vectors Using Cosine Similarity Revisited
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.17/
Bingyu, Zhang and Arefyev, Nikolay
Proceedings of the Third Workshop on Insights from Negative Results in NLP
129--133
The current state-of-the-art test accuracy (97.42{\%}) on the IMDB movie reviews dataset was reported by Thongtan and Phienthrakul (2019) and achieved by the logistic regression classifier trained on the Document Vectors using Cosine Similarity (DV-ngrams-cosine) proposed in their paper and the Bag-of-N-grams (BON) vectors scaled by Na{\"i}ve Bayesian weights. While large pre-trained Transformer-based models have shown SOTA results across many datasets and tasks, the aforementioned model has not been surpassed by them, despite being much simpler and pre-trained on the IMDB dataset only. In this paper, we describe an error in the evaluation procedure of this model, which was found when we were trying to analyze its excellent performance on the IMDB dataset. We further show that the previously reported test accuracy of 97.42{\%} is invalid and should be corrected to 93.68{\%}. We also analyze the model performance with different amounts of training data (subsets of the IMDB dataset) and compare it to the Transformer-based RoBERTa model. The results show that while RoBERTa has a clear advantage for larger training sets, the DV-ngrams-cosine performs better than RoBERTa when the labeled training set is very small (10 or 20 documents). Finally, we introduce a sub-sampling scheme based on Na{\"i}ve Bayesian weights for the training process of the DV-ngrams-cosine, which leads to faster training and better quality.
null
null
10.18653/v1/2022.insights-1.17
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,529
inproceedings
sorodoc-etal-2022-challenges
Challenges in including extra-linguistic context in pre-trained language models
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.18/
Sorodoc, Ionut and Aina, Laura and Boleda, Gemma
Proceedings of the Third Workshop on Insights from Negative Results in NLP
134--138
To successfully account for language, computational models need to take into account both the linguistic context (the content of the utterances) and the extra-linguistic context (for instance, the participants in a dialogue). We focus on a referential task that asks models to link entity mentions in a TV show to the corresponding characters, and design an architecture that attempts to account for both kinds of context. In particular, our architecture combines a previously proposed specialized module (an {\textquotedblleft}entity library{\textquotedblright}) for character representation with transfer learning from a pre-trained language model. We find that, although the model does improve linguistic contextualization, it fails to successfully integrate extra-linguistic information about the participants in the dialogue. Our work shows that it is very challenging to incorporate extra-linguistic information into pre-trained language models.
null
null
10.18653/v1/2022.insights-1.18
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,530
inproceedings
ying-thomas-2022-label
Label Errors in {BANKING}77
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.19/
Ying, Cecilia and Thomas, Stephen
Proceedings of the Third Workshop on Insights from Negative Results in NLP
139--143
We investigate potential label errors present in the popular BANKING77 dataset and the associated negative impacts on intent classification methods. Motivated by our own negative results when constructing an intent classifier, we applied two automated approaches to identify potential label errors in the dataset. We found that over 1,400 (14{\%}) of the 10,003 training utterances may have been incorrectly labelled. In a simple experiment, we found that by removing the utterances with potential errors, our intent classifier saw an increase of 4.5{\%} and 8{\%} for the F1-Score and Adjusted Rand Index, respectively, in supervised and unsupervised classification. This paper serves as a warning of the potential of noisy labels in popular NLP datasets. Further study is needed to fully identify the breadth and depth of label errors in BANKING77 and other datasets.
null
null
10.18653/v1/2022.insights-1.19
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,531
inproceedings
chen-etal-2022-pathologies
Pathologies of Pre-trained Language Models in Few-shot Fine-tuning
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.20/
Chen, Hanjie and Zheng, Guoqing and Awadallah, Ahmed and Ji, Yangfeng
Proceedings of the Third Workshop on Insights from Negative Results in NLP
144--153
Although adapting pre-trained language models with few examples has shown promising performance on text classification, there is a lack of understanding of where the performance gain comes from. In this work, we propose to answer this question by interpreting the adaptation behavior using post-hoc explanations from model predictions. By modeling feature statistics of explanations, we discover that (1) without fine-tuning, pre-trained models (e.g. BERT and RoBERTa) show strong prediction bias across labels; (2) although few-shot fine-tuning can mitigate the prediction bias and demonstrate promising prediction performance, our analysis shows that models gain performance improvements by capturing non-task-related features (e.g. stop words) or shallow data patterns (e.g. lexical overlaps). These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior, which requires further sanity checks on model predictions and careful design of model evaluations in few-shot fine-tuning.
null
null
10.18653/v1/2022.insights-1.20
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,532
inproceedings
kumar-etal-2022-empirical
An Empirical study to understand the Compositional Prowess of Neural Dialog Models
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.21/
Kumar, Vinayshekhar and Kumar, Vaibhav and Bhutani, Mukul and Rudnicky, Alexander
Proceedings of the Third Workshop on Insights from Negative Results in NLP
154--158
In this work, we examine the problems associated with neural dialog models under the common theme of compositionality. Specifically, we investigate three manifestations of compositionality: (1) Productivity, (2) Substitutivity, and (3) Systematicity. These manifestations shed light on the generalization, syntactic robustness, and semantic capabilities of neural dialog models. We design probing experiments by perturbing the training data to study the above phenomena. We make informative observations based on automated metrics and hope that this work increases research interest in understanding the capacity of these models.
null
null
10.18653/v1/2022.insights-1.21
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,533
inproceedings
alexeeva-etal-2022-combining
Combining Extraction and Generation for Constructing Belief-Consequence Causal Links
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.22/
Alexeeva, Maria and Beal Cohen, Allegra A. and Surdeanu, Mihai
Proceedings of the Third Workshop on Insights from Negative Results in NLP
159--164
In this paper, we introduce and justify a new task{---}causal link extraction based on beliefs{---}and do a qualitative analysis of the ability of a large language model{---}InstructGPT-3{---}to generate implicit consequences of beliefs. With the language model-generated consequences being promising, but not consistent, we propose directions of future work, including data collection, explicit consequence extraction using rule-based and language modeling-based approaches, and using explicitly stated consequences of beliefs to fine-tune or prompt the language model to produce outputs suitable for the task.
null
null
10.18653/v1/2022.insights-1.22
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,534
inproceedings
mieskes-2022-replicability
Replicability under Near-Perfect Conditions {--} A Case-Study from Automatic Summarization
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.23/
Mieskes, Margot
Proceedings of the Third Workshop on Insights from Negative Results in NLP
165--171
Replication of research results has become more and more important in Natural Language Processing. Nevertheless, we still rely on results reported in the literature for comparison. Additionally, elements of an experimental setup are not always completely reported. This includes, but is not limited to, reporting the specific parameters used or omitting an implementation detail. In our experiment, based on two frequently used data sets from the domain of automatic summarization and the seemingly full disclosure of research artefacts, we examine how well the reported results are replicable and what elements influence the success or failure of replication. Our results indicate that publishing research artefacts is far from sufficient, and that publishing all relevant parameters in all possible detail is crucial.
null
null
10.18653/v1/2022.insights-1.23
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,535
inproceedings
kumar-thawani-2022-bpe
{BPE} beyond Word Boundary: How {NOT} to use Multi Word Expressions in Neural Machine Translation
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.24/
Kumar, Dipesh and Thawani, Avijit
Proceedings of the Third Workshop on Insights from Negative Results in NLP
172--179
BPE tokenization merges characters into longer tokens by finding frequently occurring \textbf{contiguous} patterns \textbf{within} the word boundary. An intuitive relaxation would be to extend a BPE vocabulary with multi-word expressions (MWEs): bigrams ($in\_a$), trigrams ($out\_of\_the$), and skip-grams ($he . his$). In the context of Neural Machine Translation (NMT), we replace the least frequent subword/whole-word tokens with the most frequent MWEs. We find that these modifications to BPE end up hurting the model, resulting in a net drop of BLEU and chrF scores across two language pairs. We observe that naively extending BPE beyond word boundaries results in incoherent tokens which are themselves better represented as individual words. Moreover, we find that Pointwise Mutual Information (PMI) instead of frequency finds better MWEs (e.g., $New\_York$, $Statue\_of\_Liberty$, $neither . nor$) which consistently improve translation performance. We release all code at \url{https://github.com/pegasus-lynx/mwe-bpe}.
null
null
10.18653/v1/2022.insights-1.24
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,536
inproceedings
koch-etal-2022-pre
Pre-trained language models evaluating themselves - A comparative study
Tafreshi, Shabnam and Sedoc, Jo{\~a}o and Rogers, Anna and Drozd, Aleksandr and Rumshisky, Anna and Akula, Arjun
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.insights-1.25/
Koch, Philipp and A{\ss}enmacher, Matthias and Heumann, Christian
Proceedings of the Third Workshop on Insights from Negative Results in NLP
180--187
Evaluating generated text received new attention with the introduction of model-based metrics in recent years. These new metrics have a higher correlation with human judgments and seemingly overcome many issues of previous n-gram based metrics from the symbolic age. In this work, we examine the recently introduced metrics BERTScore, BLEURT, NUBIA, MoverScore, and Mark-Evaluate (Petersen). We investigate their sensitivity to different types of semantic deterioration (part-of-speech drop and negation), word order perturbations, word drop, and the common problem of repetition. No metric showed appropriate behaviour for negation, and, furthermore, none of them was sensitive overall to the other issues mentioned above.
null
null
10.18653/v1/2022.insights-1.25
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,537
inproceedings
sym-etal-2022-blab
{BLAB} Reporter: Automated journalism covering the Blue {A}mazon
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-demos.1/
Sym, Yan and Campos, Jo{\~a}o and Cozman, Fabio
Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations
1--3
This demo paper introduces BLAB Reporter, a robot-journalist system covering the Brazilian Blue Amazon. The application is based on a pipeline architecture for Natural Language Generation, which offers daily reports, news summaries and curious facts in Brazilian Portuguese. By collecting, storing and analysing structured data from publicly available sources, the robot-journalist uses domain knowledge to generate, validate and publish texts on Twitter. Code and corpus are publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,565
inproceedings
garcia-silva-etal-2022-generating
Generating Quizzes to Support Training on Quality Management and Assurance in Space Science and Engineering
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-demos.2/
Garcia-Silva, Andres and Berrio Aroca, Cristian and Gomez-Perez, Jose Manuel and Martinez, Jose and Fleith, Patrick and Scaglioni, Stefano
Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations
4--6
Quality management and assurance is key for space agencies to guarantee the success of space missions, which are high-risk and extremely costly. In this paper, we present a system to generate quizzes, a common resource to evaluate the effectiveness of training sessions, from documents about quality assurance procedures in the Space domain. Our system leverages state-of-the-art auto-regressive models like T5 and BART to generate questions, and a RoBERTa model to extract answers for such questions, thus verifying their suitability.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,566
inproceedings
kadam-etal-2022-automated
Automated Ad Creative Generation
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-demos.3/
Kadam, Vishakha and Jin, Yiping and Nguyen-Hoang, Bao-Dai
Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations
7--9
Ad creatives are ads served to users on a webpage, app, or other digital environments. The demand for compelling ad creatives surges drastically with the ever-increasing popularity of digital marketing. The two most essential elements of (display) ad creatives are the advertising message, such as headlines and description texts, and the visual component, such as images and videos. Traditionally, ad creatives are composed by professional copywriters and creative designers. The process requires significant human effort, limiting the scalability and efficiency of digital ad campaigns. This work introduces AUTOCREATIVE, a novel system to automatically generate ad creatives relying on natural language generation and computer vision techniques. The system generates multiple ad copies (ad headlines/description texts) using a sequence-to-sequence model and selects images most suitable to the generated ad copies based on heuristic-based visual appeal metrics and a text-image retrieval pipeline.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,567
inproceedings
rosa-etal-2022-theaitrobot
{THE}ai{TR}obot: An Interactive Tool for Generating Theatre Play Scripts
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-demos.4/
Rosa, Rudolf and Schmidtov{\'a}, Patr{\'i}cia and Zakhtarenko, Alisa and Dusek, Ondrej and Musil, Tom{\'a}{\v{s}} and Mare{\v{c}}ek, David and Ul Islam, Saad and Novakova, Marie and Vosecka, Klara and Hrbek, Daniel and Kostak, David
Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations
10--13
We present a free online demo of THEaiTRobot, an open-source bilingual tool for interactively generating theatre play scripts, in two versions. THEaiTRobot 1.0 uses the GPT-2 language model with minimal adjustments. THEaiTRobot 2.0 uses two models created by fine-tuning GPT-2 on purposefully collected and processed datasets and several other components, generating play scripts in a hierarchical fashion (title $\rightarrow$ synopsis $\rightarrow$ script). The underlying tool is used in the THEaiTRE project to generate scripts for plays, which are then performed on stage by a professional theatre.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,568
inproceedings
ghosal-etal-2022-second
The Second Automatic Minuting ({A}uto{M}in) Challenge: Generating and Evaluating Minutes from Multi-Party Meetings
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.1/
Ghosal, Tirthankar and Hled{\'i}kov{\'a}, Marie and Singh, Muskaan and Nedoluzhko, Anna and Bojar, Ond{\v{r}}ej
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
1--11
We would host the AutoMin generation challenge at INLG 2023 as a follow-up of the first AutoMin shared task at Interspeech 2021. Our shared task primarily concerns the automated generation of meeting minutes from multi-party meeting transcripts. In our first venture, we observed the difficulty of the task and highlighted a number of open problems for the community to discuss, attempt, and solve. Hence, we invite the Natural Language Generation (NLG) community to take part in the second iteration of AutoMin. Like the first, the second AutoMin will feature both English and Czech meetings and the core task of summarizing the manually-revised transcripts into bulleted minutes. A new challenge we are introducing this year is to devise efficient metrics for evaluating the quality of minutes. We will also host an optional track to generate minutes for European parliamentary sessions. We carefully curated the datasets for the above tasks. Our ELITR Minuting Corpus has been recently accepted to LREC 2022 and publicly released. We are already preparing a new test set for evaluating the new shared tasks. We hope to carry forward the learning from the first AutoMin and instigate more community attention and interest in this timely yet challenging problem. INLG, the premier forum for the NLG community, would be an appropriate venue to discuss the challenges and future of Automatic Minuting. The main objective of the AutoMin GenChal at INLG 2023 would be to come up with efficient methods to automatically generate meeting minutes and design evaluation metrics to measure the quality of the minutes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,570
inproceedings
chen-etal-2022-cross
The Cross-lingual Conversation Summarization Challenge
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.2/
Chen, Yulong and Zhong, Ming and Bai, Xuefeng and Deng, Naihao and Li, Jing and Zhu, Xianchao and Zhang, Yue
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
12--18
We propose the shared task of cross-lingual conversation summarization, ConvSumX Challenge, opening new avenues for researchers to investigate solutions that integrate conversation summarization and machine translation. This task can be particularly useful due to the emergence of online meetings and conferences. We use a new benchmark, covering 2 real-world scenarios and 3 language directions, including a low-resource language, for evaluation. We hope that ConvSumX can motivate research to go beyond English and break the barrier for non-English speakers to benefit from recent advances of conversation summarization.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,571
inproceedings
srivastava-singh-2022-hinglisheval
{H}inglish{E}val Generation Challenge on Quality Estimation of Synthetic Code-Mixed Text: Overview and Results
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.3/
Srivastava, Vivek and Singh, Mayank
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
19--25
We hosted a shared task to investigate the factors influencing the quality of code-mixed text generation systems. The teams experimented with two systems that generate synthetic code-mixed Hinglish sentences. They also experimented with human ratings that evaluate the generation quality of the two systems. The first-of-its-kind proposed subtasks, (i) quality rating prediction and (ii) annotators' disagreement prediction on the synthetic Hinglish dataset, made the shared task quite popular among the multilingual research community. A total of 46 participants comprising 23 teams from 18 institutions registered for this shared task. The detailed description of the task and the leaderboard is available at \url{https://codalab.lisn.upsaclay.fr/competitions/1688}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,572
inproceedings
kodali-etal-2022-precogiiith
{P}re{C}og{IIITH} at {H}inglish{E}val : Leveraging Code-Mixing Metrics {\&} Language Model Embeddings To Estimate Code-Mix Quality
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.4/
Kodali, Prashant and Sachan, Tanmay and Goindani, Akshay and Goel, Anmol and Ahuja, Naman and Shrivastava, Manish and Kumaraguru, Ponnurangam
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
26--30
Code-Mixing is a phenomenon of mixing two or more languages in a speech event and is prevalent in multilingual societies. Given the low-resource nature of Code-Mixing, machine generation of code-mixed text is a prevalent approach for data augmentation. However, evaluating the quality of such machine-generated code-mixed text is an open problem. In our submission to HinglishEval, a shared task collocated with INLG 2022, we attempt to model the factors that impact the quality of synthetically generated code-mixed text by predicting ratings for code-mix quality. The HinglishEval shared task consists of two subtasks: a) quality rating prediction; b) disagreement prediction. We leverage popular code-mixed metrics and embeddings of multilingual large language models (MLLMs) as features, and train task-specific MLP regression models. Our approach could not beat the baseline results. However, for Subtask-A our team ranked a close second on F-1 and Cohen's Kappa Score measures and first for Mean Squared Error measure. For Subtask-B our approach ranked third for F1 score, and first for Mean Squared Error measure. Code of our submission can be accessed here.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,573
inproceedings
singh-2022-niksss-hinglisheval
niksss at {H}inglish{E}val: Language-agnostic {BERT}-based Contextual Embeddings with Catboost for Quality Evaluation of the Low-Resource Synthetically Generated Code-Mixed {H}inglish Text
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.5/
Singh, Nikhil
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
31--34
This paper presents the system description for the HinglishEval challenge at INLG 2022. The goal of this task was to investigate the factors influencing the quality of the code-mixed text generation system. The task was divided into two subtasks, quality rating prediction and annotators' disagreement prediction of the synthetic Hinglish dataset. We attempted to solve these tasks using sentence-level embeddings, which are obtained from mean pooling the contextualized word embeddings for all input tokens in our text. We experimented with various classifiers on top of the embeddings produced for respective tasks. Our best-performing system ranked 1st on subtask B and 3rd on subtask A. We make our code available here: \url{https://github.com/nikhilbyte/Hinglish-qEval}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,574
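The mean-pooling step this abstract describes is simple to reproduce. The following is a minimal sketch, not the authors' released code; the bert-base-multilingual-cased checkpoint is an assumption standing in for whichever language-agnostic BERT the system used.

# Minimal sketch of mean-pooled sentence embeddings as described above;
# not the authors' code. The multilingual BERT checkpoint is assumed.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def sentence_embedding(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)    # mask out padding
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

emb = sentence_embedding("Yeh synthetic sentence kaafi natural lagta hai")
print(emb.shape)   # torch.Size([1, 768]), input to a downstream classifier

A gradient-boosting classifier such as the CatBoost model named in the title can then be fit on these fixed-size vectors.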
inproceedings
furniturewala-etal-2022-bits
{BITS} Pilani at {H}inglish{E}val: Quality Evaluation for Code-Mixed {H}inglish Text Using Transformers
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.6/
Furniturewala, Shaz and Kumari, Vijay and Dash, Amulya Ratna and Kedia, Hriday and Sharma, Yashvardhan
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
35--38
Code-Mixed text data consists of sentences having words or phrases from more than one language. Most multi-lingual communities worldwide communicate using multiple languages, with English usually one of them. Hinglish is a Code-Mixed text composed of Hindi and English but written in Roman script. This paper aims to determine the factors influencing the quality of Code-Mixed text data generated by the system. For the HinglishEval task, the proposed model uses multilingual BERT to find the similarity between synthetically generated and human-generated sentences to predict the quality of synthetically generated Hinglish sentences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,575
inproceedings
guha-etal-2022-ju
{JU}{\_}{NLP} at {H}inglish{E}val: Quality Evaluation of the Low-Resource Code-Mixed {H}inglish Text
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.7/
Guha, Prantik and Dhar, Rudra and Das, Dipankar
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
39--42
In this paper we describe a system submitted to the INLG 2022 Generation Challenge (GenChal) on Quality Evaluation of the Low-Resource Synthetically Generated Code-Mixed Hinglish Text. We implement a Bi-LSTM-based neural network model to predict the Average rating score and Disagreement score of the synthetic Hinglish dataset. In our models, we used word embeddings for English and Hindi data, and one-hot encodings for Hinglish data. We achieved an F1 score of 0.11 and a mean squared error of 6.0 in the average rating score prediction task. In the task of Disagreement score prediction, we achieved an F1 score of 0.18 and a mean squared error of 5.0.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,576
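For a concrete picture of the architecture named in this abstract, here is a minimal sketch of a Bi-LSTM score regressor; it is not the JU_NLP code, and all layer sizes are assumptions for illustration.

# Hypothetical Bi-LSTM rater: token ids -> embeddings -> BiLSTM ->
# a single predicted rating score. Sizes are illustrative only.
import torch
import torch.nn as nn

class BiLSTMRater(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # scalar quality score

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))   # (batch, seq, 2*hidden)
        return self.head(out.mean(dim=1)).squeeze(-1)

model = BiLSTMRater()
dummy = torch.randint(0, 30000, (2, 12))   # two sentences of 12 tokens
print(model(dummy).shape)                  # torch.Size([2])

Trained with a mean-squared-error loss, such a model directly targets the MSE figures the abstract reports.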
inproceedings
belz-etal-2022-2022
The 2022 {R}epro{G}en Shared Task on Reproducibility of Evaluations in {NLG}: Overview and Results
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.8/
Belz, Anya and Shimorina, Anastasia and Popovi{\'c}, Maja and Reiter, Ehud
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
43--51
Against a background of growing interest in reproducibility in NLP and ML, and as part of an ongoing research programme designed to develop theory and practice of reproducibility assessment in NLP, we organised the second shared task on reproducibility of evaluations in NLG, ReproGen 2022. This paper describes the shared task, summarises results from the reproduction studies submitted, and provides further comparative analysis of the results. Out of six initial team registrations, we received submissions from five teams. Meta-analysis of the five reproduction studies revealed varying degrees of reproducibility, and allowed further tentative conclusions about what types of evaluation tend to have better reproducibility.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,577
inproceedings
huidrom-etal-2022-two
Two Reproductions of a Human-Assessed Comparative Evaluation of a Semantic Error Detection System
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.9/
Huidrom, Rudali and Du{\v{s}}ek, Ond{\v{r}}ej and Kasner, Zden{\v{e}}k and Castro Ferreira, Thiago and Belz, Anya
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
52--61
In this paper, we present the results of two reproduction studies for the human evaluation originally reported by Du{\v{s}}ek and Kasner (2020) in which the authors comparatively evaluated outputs produced by a semantic error detection system for data-to-text generation against reference outputs. In the first reproduction, the original evaluators repeat the evaluation, in a test of the repeatability of the original evaluation. In the second study, two new evaluators carry out the evaluation task, in a test of the reproducibility of the original evaluation under otherwise identical conditions. We describe our approach to reproduction, and present and analyse results, finding different degrees of reproducibility depending on result type, data and labelling task. Our resources are available and open-sourced.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,578
inproceedings
arvan-etal-2022-reproducibility
Reproducibility of Exploring Neural Text Simplification Models: A Review
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.10/
Arvan, Mohammad and Pina, Lu{\'i}s and Parde, Natalie
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
62--70
The reproducibility of NLP research has drawn increased attention over the last few years. Several tools, guidelines, and metrics have been introduced to address concerns in regard to this problem; however, much work still remains to ensure widespread adoption of effective reproducibility standards. In this work, we review the reproducibility of Exploring Neural Text Simplification Models by Nisioi et al. (2017), evaluating it from three main aspects: data, software artifacts, and automatic evaluations. We discuss the challenges and issues we faced during this process. Furthermore, we explore the adequacy of current reproducibility standards. Our code, trained models, and a docker container of the environment used for training and evaluation are made publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,579
inproceedings
thomson-reiter-2022-accuracy
The Accuracy Evaluation Shared Task as a Retrospective Reproduction Study
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.11/
Thomson, Craig and Reiter, Ehud
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
71--79
We investigate the data collected for the Accuracy Evaluation Shared Task as a retrospective reproduction study. The shared task was based upon errors found by human annotation of computer generated summaries of basketball games. Annotation was performed in three separate stages, with texts taken from the same three systems and checked for errors by the same three annotators. We show that the mean count of errors was consistent at the highest level for each experiment, with increased variance when looking at per-system and/or per-error-type breakdowns.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,580
inproceedings
popovic-etal-2022-reproducing
Reproducing a Manual Evaluation of the Simplicity of Text Simplification System Outputs
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.12/
Popovi{\'c}, Maja and Castilho, Sheila and Huidrom, Rudali and Belz, Anya
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
80--85
In this paper we describe our reproduction study of the human evaluation of text simplicity reported by Nisioi et al. (2017). The work was carried out as part of the ReproGen Shared Task 2022 on Reproducibility of Evaluations in NLG. Our aim was to repeat the evaluation of simplicity for nine automatic text simplification systems with a different set of evaluators. We describe our experimental design together with the known aspects of the original experimental design and present the results from both studies. Pearson correlation between the original and reproduction scores is moderate to high (0.776). Inter-annotator agreement in the reproduction study is lower (0.40) than in the original study (0.66). We discuss challenges arising from the unavailability of certain aspects of the original set-up, and make several suggestions as to how reproduction of similar evaluations can be made easier in future.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,581
inproceedings
braggaar-etal-2022-reproduction
A reproduction study of methods for evaluating dialogue system output: Replicating Santhanam and Shaikh (2019)
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.13/
Braggaar, Anouck and Tomas, Fr{\'e}d{\'e}ric and Blomsma, Peter and Hommes, Saar and Braun, Nadine and van Miltenburg, Emiel and van der Lee, Chris and Goudbeek, Martijn and Krahmer, Emiel
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
86--93
In this paper, we describe our reproduction effort of the paper: Towards Best Experiment Design for Evaluating Dialogue System Output by Santhanam and Shaikh (2019) for the 2022 ReproGen shared task. We aim to produce the same results, using different human evaluators, and a different implementation of the automatic metrics used in the original paper. Although overall the study posed some challenges to reproduce (e.g. difficulties with reproduction of automatic metrics and statistics), in the end we did find that the results generally replicate the findings of Santhanam and Shaikh (2019) and seem to follow similar trends.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,582
inproceedings
chen-etal-2022-dialogsum
{D}ialog{S}um Challenge: Results of the Dialogue Summarization Shared Task
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.14/
Chen, Yulong and Deng, Naihao and Liu, Yang and Zhang, Yue
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
94--103
We report the results of DialogSum Challenge, the shared task on summarizing real-life scenario dialogues at INLG 2022. Four teams participate in this shared task and three submit their system reports, exploring different methods to improve the performance of dialogue summarization. Although there is a great improvement over the baseline models regarding automatic evaluation metrics, such as ROUGE scores, we find that there is a salient gap between model generated outputs and human annotated summaries by human evaluation from multiple aspects. These findings demonstrate the difficulty of dialogue summarization and suggest that more fine-grained evaluation metrics are needed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,583
inproceedings
chauhan-etal-2022-tcs
{TCS}{\_}{WITM}{\_}2022 @ {D}ialog{S}um : Topic oriented Summarization using Transformer based Encoder Decoder Model
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.15/
Chauhan, Vipul and Roy, Prasenjeet and Dey, Lipika and Goel, Tushar
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
104--109
In this paper, we present our approach to the DialogSum challenge, which was proposed as a shared task aimed to summarize dialogues from real-life scenarios. The challenge was to design a system that can generate fluent and salient summaries of a multi-turn dialogue text. Dialogue summarization has many commercial applications as it can be used to summarize conversations between customers and service agents, meeting notes, conference proceedings etc. Appropriate dialogue summarization can enhance the experience of conversing with chatbots or personal digital assistants. We have proposed a topic-based abstractive summarization method, generated by fine-tuning PEGASUS, the state-of-the-art abstractive summary generation model. We have compared different types of fine-tuning approaches that can lead to different types of summaries. We found that since conversations usually veer around a topic, using topics along with the dialogues helps to generate more human-like summaries. The topics in this case resemble the user perspective, around which summaries are usually sought. The generated summary has been evaluated with ground truth summaries provided by the challenge owners. We use the py-rouge score and BERTScore metrics to compare the results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,584
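The topic-prefixed fine-tuning this abstract describes can be pictured with a short sketch. This is not the team's code: the checkpoint (google/pegasus-xsum stands in for their DialogSum fine-tune) and the "topic: ... dialogue: ..." input format are assumptions for illustration only.

# Hypothetical sketch of topic-conditioned dialogue summarization with
# PEGASUS; checkpoint and input format are assumed, not from the paper.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

name = "google/pegasus-xsum"   # stand-in for a DialogSum fine-tuned model
tokenizer = PegasusTokenizer.from_pretrained(name)
model = PegasusForConditionalGeneration.from_pretrained(name)

dialogue = ("topic: doctor visit dialogue: #Person1#: Hi, what brings you "
            "in today? #Person2#: I've had a sore throat since Monday.")
batch = tokenizer(dialogue, truncation=True, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))

In the paper's setup the topic string comes with the DialogSum data; prepending it to the input is one simple way to steer the encoder toward the user perspective the abstract mentions.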
inproceedings
bhattacharjee-etal-2022-multi
A Multi-Task Learning Approach for Summarization of Dialogues
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.16/
Bhattacharjee, Saprativa and Shinde, Kartik and Ghosal, Tirthankar and Ekbal, Asif
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
110--120
We describe our multi-task learning based approach for summarization of real-life dialogues as part of the DialogSum Challenge shared task at INLG 2022. Our approach intends to improve the main task of abstractive summarization of dialogues through the auxiliary tasks of extractive summarization, novelty detection and language modeling. We conduct extensive experimentation with different combinations of tasks and compare the results. In addition, we also incorporate the topic information provided with the dataset to perform topic-aware summarization. We report the results of automatic evaluation of the generated summaries in terms of ROUGE and BERTScore.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,585
inproceedings
lundberg-etal-2022-dialogue
Dialogue Summarization using {BART}
Shaikh, Samira and Ferreira, Thiago and Stent, Amanda
jul
2022
Waterville, Maine, USA and virtual meeting
Association for Computational Linguistics
https://aclanthology.org/2022.inlg-genchal.17/
Lundberg, Conrad and S{\'a}nchez Vi{\~n}uela, Leyre and Biales, Siena
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
121--125
This paper introduces the model and settings submitted to the INLG 2022 DialogSum Challenge, a shared task to generate summaries of real-life scenario dialogues between two people. In this paper, we explored using intermediate task transfer learning, reported speech, and the use of a supplementary dataset in addition to our base fine-tuned BART model. However, we did not use such a method in our final model, as none improved our results. Our final model for this dialogue task achieved scores only slightly below the top submission, with hidden test set scores of 49.62, 24.98, 46.25 and 91.54 for ROUGE-1, ROUGE-2, ROUGE-L and BERTSCORE respectively. The top submitted models will also receive human evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,586
inproceedings
schneider-etal-2022-data
Data-to-text systems as writing environment
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.1/
Schneider, Adela and Madsack, Andreas and Heininger, Johanna and Chen, Ching-Yi and Wei{\ss}graeber, Robert
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
1--10
Today, data-to-text systems are used as commercial solutions for automated text production of large quantities of text. Therefore, they already represent a new technology of writing. This new technology requires the author, as an act of writing, both to configure a system that then takes over the transformation into a real text and to maintain strategies of traditional writing. What should an environment look like where a human guides a machine to write texts? Based on a comparison of the NLG pipeline architecture with the results of the research on the human writing process, this paper attempts to take an overview of which tasks need to be solved and which strategies are necessary to produce good texts in this environment. From this synopsis, principles for the design of data-to-text systems as a functioning writing environment are then derived.
null
null
10.18653/v1/2022.in2writing-1.1
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,588
inproceedings
gero-etal-2022-design
A Design Space for Writing Support Tools Using a Cognitive Process Model of Writing
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.2/
Gero, Katy and Calderwood, Alex and Li, Charlotte and Chilton, Lydia
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
11--24
Improvements in language technology have led to an increasing interest in writing support tools. In this paper we propose a design space for such tools based on a cognitive process model of writing. We conduct a systematic review of recent computer science papers that present and/or study such tools, analyzing 30 papers from the last five years using the design space. Tools are plotted according to three distinct cognitive processes (planning, translating, and reviewing) and the level of constraint each process entails. Analyzing recent work with the design space shows that highly constrained planning and reviewing are under-studied areas that recent technology improvements may now be able to serve. Finally, we propose shared evaluation methodologies and tasks that may help the field mature.
null
null
10.18653/v1/2022.in2writing-1.2
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,589
inproceedings
singh-etal-2022-selective
A Selective Summary of Where to Hide a Stolen Elephant: Leaps in Creative Writing with Multimodal Machine Intelligence
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.3/
Singh, Nikhil and Bernal, Guillermo and Savchenko, Daria and Glassman, Elena
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
25--26
While developing a story, novices and published writers alike have had to look outside themselves for inspiration. Language models have recently been able to generate text fluently, producing new stochastic narratives upon request. However, effectively integrating such capabilities with human cognitive faculties and creative processes remains challenging. We propose to investigate this integration with a multimodal writing support interface that offers writing suggestions textually, visually, and aurally. We conduct an extensive study that combines elicitation of prior expectations before writing, observation and semi-structured interviews during writing, and outcome evaluations after writing. Our results illustrate individual and situational variation in machine-in-the-loop writing approaches, suggestion acceptance, and ways the system is helpful. Centrally, we report how participants perform integrative leaps, by which they do cognitive work to integrate suggestions of varying semantic relevance into their developing stories. We interpret these findings, offering modeling and design recommendations for future creative writing support technologies.
null
null
10.18653/v1/2022.in2writing-1.3
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,590
inproceedings
steinmetz-harbusch-2022-text
A text-writing system for Easy-to-Read {G}erman evaluated with low-literate users with cognitive impairment
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.4/
Steinmetz, Ina and Harbusch, Karin
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
27--38
Low-literate users with intellectual or developmental disabilities (IDD) and/or complex communication needs (CCN) require specific writing support. We present a system that interactively supports fast and correct writing of a variant of Leichte Sprache (LS; German term for easy-to-read German), slightly extended within and beyond the inner-sentential syntactic level. The system provides simple and intuitive dialogues for selecting options from a natural-language paraphrase generator. Moreover, it reminds the user to add text elements enhancing understandability, audience design, and text coherence. In earlier development phases, the system was evaluated with different groups of substitute users. Here, we report a case study with seven low-literate users with IDD.
null
null
10.18653/v1/2022.in2writing-1.4
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,591
inproceedings
wiegmann-etal-2022-language
Language Models as Context-sensitive Word Search Engines
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.5/
Wiegmann, Matti and V{\"olske, Michael and Stein, Benno and Potthast, Martin
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
39--45
Context-sensitive word search engines are writing assistants that support word choice, phrasing, and idiomatic language use by indexing large-scale n-gram collections and implementing a wildcard search. However, search results become unreliable with increasing context size (e.g., n >= 5), when observations become sparse. This paper proposes two strategies for word search with larger n, based on masked and conditional language modeling. We build such search engines using BERT and BART and compare their capabilities in answering English context queries with those of the n-gram-based word search engine Netspeak. Our proposed strategies score within 5 percentage points MRR of n-gram collections while answering up to 5 times as many queries.
null
null
10.18653/v1/2022.in2writing-1.5
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,592
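The masked-language-modeling strategy in this abstract can be illustrated in a few lines. A minimal sketch, assuming a stock bert-base-uncased checkpoint rather than the paper's actual search engines:

# Minimal sketch: a masked LM ranking candidates for a wildcard slot,
# the core idea behind the BERT-based word search strategy above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
query = "she decided to [MASK] the meeting until next week"
for cand in fill(query, top_k=5):
    print(f"{cand['token_str']:>12}  {cand['score']:.3f}")

Unlike an n-gram lookup, the model conditions on the full query context, which is what lets such a search remain reliable once n reaches 5 and beyond.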
inproceedings
mori-etal-2022-plug
Plug-and-Play Controller for Story Completion: A Pilot Study toward Emotion-aware Story Writing Assistance
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.6/
Mori, Yusuke and Yamane, Hiroaki and Shimizu, Ryohei and Harada, Tatsuya
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
46--57
Emotions are essential for storytelling and narrative generation, and as such, the relationship between stories and emotions has been extensively studied. The authors of this paper, including a professional novelist, have examined the use of natural language processing to address the problems of novelists from the perspective of practical creative writing. In particular, the story completion task, which requires understanding the existing unfinished context, was studied from the perspective of creative support for human writers, to generate appropriate content to complete the unfinished parts. It was found that unsupervised pre-trained large neural models of the sequence-to-sequence type are useful for this task. Furthermore, based on the plug-and-play module for controllable text generation using GPT-2, an additional module was implemented to consider emotions. Although this is a preliminary study, and the results leave room for improvement before incorporating the model into a practical system, this effort is an important step in complementing the emotional trajectory of the story.
null
null
10.18653/v1/2022.in2writing-1.6
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,593
inproceedings
li-etal-2022-text
Text Revision by On-the-Fly Representation Optimization
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.7/
Li, Jingjing and Li, Zichao and Ge, Tao and King, Irwin and Lyu, Michael
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
58--59
Text revision refers to a family of natural language generation tasks, where the source and target sequences share moderate resemblance in surface form but differentiate in attributes, such as text formality and simplicity. Current state-of-the-art methods formulate these tasks as sequence-to-sequence learning problems, which rely on large-scale parallel training corpora. In this paper, we present an iterative in-place editing approach for text revision, which requires no parallel data. In this approach, we simply fine-tune a pre-trained Transformer with masked language modeling and attribute classification. During inference, the editing at each iteration is realized by two-step span replacement. At the first step, the distributed representation of the text optimizes on the fly towards an attribute function. At the second step, a text span is masked and another new one is proposed conditioned on the optimized representation. The empirical experiments on two typical and important text revision tasks, text formalization and text simplification, show the effectiveness of our approach. It achieves competitive and even better performance than state-of-the-art supervised methods on text simplification, and gains better performance than strong unsupervised methods on text formalization.
null
null
10.18653/v1/2022.in2writing-1.7
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,594
inproceedings
gunser-etal-2022-pure
The Pure Poet: How Good is the Subjective Credibility and Stylistic Quality of Literary Short Texts Written with an Artificial Intelligence Tool as Compared to Texts Written by Human Authors?
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.8/
Gunser, Vivian Emily and Gottschling, Steffen and Brucker, Birgit and Richter, Sandra and {\c{C}}akir, D{\^i}lan Canan and Gerjets, Peter
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
60--61
The application of artificial intelligence (AI) for text generation in creative domains raises questions regarding the credibility of AI-generated content. In two studies, we explored if readers can differentiate between AI-based and human-written texts (generated based on the first line of texts and poems of classic authors) and how the stylistic qualities of these texts are rated. Participants read 9 AI-based continuations and either 9 human-written continuations (Study 1 [...] 302). Participants' task was to decide whether a continuation was written with an AI-tool or not, to indicate their confidence in each decision, and to assess the stylistic text quality. Results showed that participants generally had low accuracy for differentiating between text types but were overconfident in their decisions. Regarding the assessment of stylistic quality, AI-continuations were perceived as less well-written, inspiring, fascinating, interesting, and aesthetic than both human-written and original continuations.
null
null
10.18653/v1/2022.in2writing-1.8
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,595
inproceedings
lee-etal-2022-interactive
Interactive Children's Story Rewriting Through Parent-Children Interaction
Huang, Ting-Hao 'Kenneth' and Raheja, Vipul and Kang, Dongyeop and Chung, John Joon Young and Gissin, Daniel and Lee, Mina and Gero, Katy Ilonka
may
2022
Dublin, Ireland
Association for Computational Linguistics
https://aclanthology.org/2022.in2writing-1.9/
Lee, Yoonjoo and Kim, Tae Soo and Chang, Minsuk and Kim, Juho
Proceedings of the First Workshop on Intelligent and Interactive Writing Assistants (In2Writing 2022)
62--71
Storytelling in early childhood provides significant benefits in language and literacy development, relationship building, and entertainment. To maximize these benefits, it is important to empower children with more agency. Interactive story rewriting through parent-children interaction can boost children's agency and help build the relationship between parent and child as they collaboratively create changes to an original story. However, for children with limited proficiency in reading and writing, parents must carry out multiple tasks to guide the rewriting process, which can incur a high cognitive load. In this work, we introduce an interface design that aims to support children and parents to rewrite stories together with the help of AI techniques. We describe three design goals determined by a review of prior literature in interactive storytelling and existing educational activities. We also propose a preliminary prompt-based pipeline that uses GPT-3 to realize the design goals and enable the interface.
null
null
10.18653/v1/2022.in2writing-1.9
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
25,596