Schema (⌀ = nullable): entry_type (string, 4 classes), citation_key (string, 10–110 chars), title (string, 6–276 chars, ⌀), editor (string, 723 classes), month (string, 69 classes), year (dates 1963-01-01 to 2022-01-01), address (string, 202 classes), publisher (string, 41 classes), url (string, 34–62 chars), author (string, 6–2.07k chars, ⌀), booktitle (string, 861 classes), pages (string, 1–12 chars, ⌀), abstract (string, 302–2.4k chars), journal (string, 5 classes), volume (string, 24 classes), doi (string, 20–39 chars, ⌀), n (string, 3 classes), wer (string, 1 class), uas (null), language (string, 3 classes), isbn (string, 34 classes), recall (null), number (string, 8 classes), a (null), b (null), c (null), k (null), f1 (string, 4 classes), r (string, 2 classes), mci (string, 1 class), p (string, 2 classes), sd (string, 1 class), female (string, 0 classes), m (string, 0 classes), food (string, 1 class), f (string, 1 class), note (string, 20 classes), __index_level_0__ (int64, 22k–106k).

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | wu-etal-2022-towards-context | Towards In-Context Non-Expert Evaluation of Reflection Generation for Counselling Conversations | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.9/ | Wu, Zixiu and Balloccu, Simone and Helaoui, Rim and Recupero, Diego Reforgiato and Riboni, Daniele | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 116--124 | Reflection is an essential counselling strategy, where the therapist listens actively and responds with their own interpretation of the client's words. Recent work leveraged pre-trained language models (PLMs) to approach reflection generation as a promising tool to aid counsellor training. However, those studies used limited dialogue context for modelling and simplistic error analysis for human evaluation. In this work, we take the first step towards addressing those limitations. First, we fine-tune PLMs on longer dialogue contexts for reflection generation. Then, we collect free-text error descriptions from non-experts about generated reflections, identify common patterns among them, and accordingly establish discrete error categories using thematic analysis. Based on this scheme, we plan for future work a mass non-expert error annotation phase for generated reflections followed by an expert-based validation phase, namely "whether a coherent and consistent response is a good reflection". | null | null | 10.18653/v1/2022.gem-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,758 |
inproceedings | pisarevskaya-shavrina-2022-wikiomnia | WikiOmnia: filtration and evaluation of the generated QA corpus on the whole Russian Wikipedia | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.10/ | Pisarevskaya, Dina and Shavrina, Tatiana | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 125--135 | The General QA field has been developing the methodology referencing the Stanford Question answering dataset (SQuAD) as the significant benchmark. Compiling factual questions datasets requires manual annotations, limiting the training data's potential size. We present the WikiOmnia dataset, a new publicly available set of QA pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generation and filtration pipeline. To ensure high quality of generated QA pairs, diverse manual and automated evaluation techniques were applied. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large). | null | null | 10.18653/v1/2022.gem-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,759 |
inproceedings | mousavi-etal-2022-evaluation | Evaluation of Response Generation Models: Shouldn't It Be Shareable and Replicable? | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.12/ | Mousavi, Seyed Mahed and Roccabruna, Gabriel and Lorandi, Michela and Caldarella, Simone and Riccardi, Giuseppe | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 136--147 | Human Evaluation (HE) of automatically generated responses is necessary for the advancement of human-machine dialogue research. Current automatic evaluation measures are poor surrogates, at best. There are no agreed-upon HE protocols and it is difficult to develop them. As a result, researchers either perform non-replicable, non-transparent and inconsistent procedures or, worse, limit themselves to automated metrics. We propose to standardize the human evaluation of response generation models by publicly sharing a detailed protocol. The proposal includes the task design, annotators recruitment, task execution, and annotation reporting. Such protocol and process can be used as-is, as-a-whole, in-part, or modified and extended by the research community. We validate the protocol by evaluating two conversationally fine-tuned state-of-the-art models (GPT-2 and T5) for the complex task of personalized response generation. We invite the community to use this protocol - or its future community amended versions - as a transparent, replicable, and comparable approach to HE of generated responses. | null | null | 10.18653/v1/2022.gem-1.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,760 |
inproceedings | calo-etal-2022-enhancing | Enhancing and Evaluating the Grammatical Framework Approach to Logic-to-Text Generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.13/ | Calò, Eduardo and van der Werf, Elze and Gatt, Albert and van Deemter, Kees | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 148--171 | Logic-to-text generation is an important yet underrepresented area of natural language generation (NLG). In particular, most previous works on this topic lack sound evaluation. We address this limitation by building and evaluating a system that generates high-quality English text given a first-order logic (FOL) formula as input. We start by analyzing the performance of Ranta (2011)'s system. Based on this analysis, we develop an extended version of the system, which we name LoLa, that performs formula simplification based on logical equivalences and syntactic transformations. We carry out an extensive evaluation of LoLa using standard automatic metrics and human evaluation. We compare the results against a baseline and Ranta (2011)'s system. The results show that LoLa outperforms the other two systems in most aspects. | null | null | 10.18653/v1/2022.gem-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,761 |
inproceedings | jansen-etal-2022-controllable | Controllable Text Generation for All Ages: Evaluating a Plug-and-Play Approach to Age-Adapted Dialogue | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.14/ | Jansen, Lennert and Laichter, Štěpán Lars and Sinclair, Arabella and van der Goot, Margot and Fernandez, Raquel and Pezzelle, Sandro | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 172--188 | To be trusted and perceived as natural and coherent, conversational systems must adapt to the language of their users. While personalized dialogue is a promising direction, controlling generation for fine-grained language features remains a challenge in this approach. A recent line of research showed the effectiveness of leveraging pre-trained language models toward adapting to a text's topic or sentiment. In this study, we build on these approaches and focus on a higher-level dimension of language variation: speakers' age. We frame the task as a dialogue response generation, and test methods based on bag-of-words (BoW) and neural discriminators (Disc) to condition the output of GPT-2 and DialoGPT without altering the parameters of the language models. We show that Disc models achieve a higher degree of detectable control than BoW models based on automatic evaluation. In contrast, humans can partially detect age differences in BoW but not Disc responses. Since BoW responses are deemed better than Disc ones by humans, simple controllable methods thus appear to be a better tradeoff between adaptation and language quality. Our work confirms the challenges of adapting to higher-level dimensions of language variation. Moreover, it highlights the need to evaluate natural language generation thoroughly. | null | null | 10.18653/v1/2022.gem-1.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,762 |
inproceedings | li-lioma-2022-template | Template-based Contact Email Generation for Job Recommendation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.15/ | Li, Qiuchi and Lioma, Christina | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 189--197 | Text generation has long been a popular research topic in NLP. However, the task of generating contact emails from recruiters to candidates in the job recommendation scenario has received little attention from the research community. This work aims at defining the topic of automatic email generation for job recommendation, identifying the challenges, and providing a baseline template-based solution for Danish jobs. Evaluation by human experts shows that our method is effective. We wrap up by discussing the future research directions for better solving this task. | null | null | 10.18653/v1/2022.gem-1.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,763 |
inproceedings | kumar-gangadharaiah-2022-abstractive | Are Abstractive Summarization Models truly 'Abstractive'? An Empirical Study to Compare the two Forms of Summarization | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.17/ | Kumar, Vinayshekhar Bannihatti and Gangadharaiah, Rashmi | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 198--206 | Automatic Text Summarization has seen a large paradigm shift from extractive methods to abstractive (or generation-based) methods in the last few years. This can be attributed to the availability of large autoregressive language models that have been shown to outperform extractive methods. In this work, we revisit extractive methods and study their performance against state-of-the-art (SOTA) abstractive models. Through extensive studies, we notice that abstractive methods are not yet completely abstractive in their generated summaries. In addition to this finding, we propose an evaluation metric that could benefit the summarization research community to measure the degree of abstractiveness of a summary in comparison to their extractive counterparts. To confirm the generalizability of our findings, we conduct experiments on two summarization datasets using five powerful techniques in extractive and abstractive summarization and study their levels of abstraction. | null | null | 10.18653/v1/2022.gem-1.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,764 |
inproceedings | lorincz-etal-2022-transfer | Transfer learning for multilingual vacancy text generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.18/ | Lorincz, Anna and Graus, David and Lavi, Dor and Pereira, Joao Lebre Magalhaes | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 207--222 | Writing job vacancies is a repetitive and expensive task for humans. This research focuses on automatically generating the benefit sections of vacancies at redacted from job attributes using mT5, the multilingual version of the state-of-the-art T5 transformer trained on general domains to generate texts in multiple languages. While transformers are accurate at generating coherent text, they are sometimes incorrect at including the structured data (the input) in the generated text. Including the input correctly is crucial for vacancy text generation; otherwise, the candidates may get misled. To evaluate how the model includes the input we developed our own domain-specific metrics (input generation accuracy). This was necessary, because Relation Generation, the pre-existing evaluation metric for data-to-text generation, uses only string matching, which was not suitable for our dataset (due to the binary field). With the help of the new evaluation method we were able to measure how well the input is included in the generated text separately for different types of inputs (binary, categorical, numeric), offering another contribution to the field. Additionally, we also evaluated how accurately the mT5 model generates the text in the requested language. The results show that mT5 is very accurate at generating the text in the correct language, at including seen categorical inputs and binary values correctly in the generated text. However, mT5 performed worse when generating text from unseen city names or working with numeric inputs. Furthermore, we found that generating additional synthetic training data for the samples with numeric input can increase the input generation accuracy; however, this only works when the numbers are integers and only cover a small range. | null | null | 10.18653/v1/2022.gem-1.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,765 |
inproceedings | liu-etal-2022-plug | Plug-and-Play Recipe Generation with Content Planning | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.19/ | Liu, Yinhong and Su, Yixuan and Shareghi, Ehsan and Collier, Nigel | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 223--234 | Recent pre-trained language models have shown promising capability to generate fluent and realistic natural text. However, generating multi-sentence text with global content planning has been a long-existing research question. The current controlled text generation models cannot directly address this issue, as they usually condition on a single known control attribute. We propose a low-cost yet effective framework that explicitly models content plans and optimizes the joint distribution of the natural sequence and the content plans in a plug-and-play post-processing manner. We evaluate our model with extensive automatic metrics and human evaluations and show that it achieves the state-of-the-art performance on the recipe generation task on Recipe1M+ dataset. | null | null | 10.18653/v1/2022.gem-1.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,766 |
inproceedings | huang-etal-2022-towards | Towards Attribute-Entangled Controllable Text Generation: A Pilot Study of Blessing Generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.20/ | Huang, Shulin and Ma, Shirong and Li, Yinghui and Yangning, Li and Lin, Shiyang and Zheng, Haitao and Shen, Ying | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 235--247 | Controllable Text Generation (CTG) has obtained great success due to its fine-grained generation ability obtained by focusing on multiple attributes. However, most existing CTG research overlooks how to utilize the attribute entanglement to enhance the diversity of the controlled generated texts. Facing this dilemma, we focus on a novel CTG scenario, i.e., blessing generation, which is challenging because high-quality blessing texts require CTG models to comprehensively consider the entanglement between multiple attributes (e.g., objects and occasions). To promote the research on blessing generation, we present EBleT, a large-scale Entangled Blessing Text dataset containing 293K English sentences annotated with multiple attributes. Furthermore, we propose novel evaluation metrics to measure the quality of the blessing texts generated by the baseline models we designed. Our study opens a new research direction for controllable text generation and enables the development of attribute-entangled CTG models. | null | null | 10.18653/v1/2022.gem-1.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,767 |
inproceedings | marfurt-henderson-2022-unsupervised | Unsupervised Token-level Hallucination Detection from Summary Generation By-products | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.21/ | Marfurt, Andreas and Henderson, James | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 248--261 | Hallucinations in abstractive summarization are model generations that are unfaithful to the source document. Current methods for detecting hallucinations operate mostly on noun phrases and named entities, and restrict themselves to the XSum dataset, which is known to have hallucinations in 3 out of 4 training examples (Maynez et al., 2020). We instead consider the CNN/DailyMail dataset where the summarization model has not seen abnormally many hallucinations during training. We automatically detect candidate hallucinations at the token level, irrespective of part of speech. Our detection comes essentially for free, as we only use information the model already produces during generation of the summary. This enables practitioners to jointly generate a summary and identify possible hallucinations, with minimal overhead. We repurpose an existing factuality dataset and create our own token-level annotations. The evaluation on these two datasets shows that our model achieves better precision-recall tradeoffs than its competitors, which additionally require a model forward pass. | null | null | 10.18653/v1/2022.gem-1.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,768 |
inproceedings | marfurt-etal-2022-corpus | A Corpus and Evaluation for Predicting Semi-Structured Human Annotations | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.22/ | Marfurt, Andreas and Thornton, Ashley and Sylvan, David and van der Plas, Lonneke and Henderson, James | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 262--275 | A wide variety of tasks have been framed as text-to-text tasks to allow processing by sequence-to-sequence models. We propose a new task of generating a semi-structured interpretation of a source document. The interpretation is semi-structured in that it contains mandatory and optional fields with free-text information. This structure is surfaced by human annotations, which we standardize and convert to text format. We then propose an evaluation technique that is generally applicable to any such semi-structured annotation, called equivalence classes evaluation. The evaluation technique is efficient and scalable; it creates a large number of evaluation instances from a comparably cheap clustering of the free-text information by domain experts. For our task, we release a dataset about the monetary policy of the Federal Reserve. On this corpus, our evaluation shows larger differences between pretrained models than standard text generation metrics. | null | null | 10.18653/v1/2022.gem-1.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,769 |
inproceedings | arcadinho-etal-2022-t5ql | T5QL: Taming language models for SQL generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.23/ | Arcadinho, Samuel David and Aparicio, David and Veiga, Hugo and Alegria, Antonio | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 276--286 | Automatic SQL generation has been an active research area, aiming at streamlining the access to databases by writing natural language with the given intent instead of writing SQL. Current SOTA methods for semantic parsing depend on LLMs to achieve high predictive accuracy on benchmark datasets. This reduces their applicability, since LLMs require expensive GPUs. Furthermore, SOTA methods are ungrounded and thus not guaranteed to always generate valid SQL. Here we propose T5QL, a new SQL generation method that improves the performance in benchmark datasets when using smaller LMs, namely T5-Base, by 13pp when compared against SOTA methods. Additionally, T5QL is guaranteed to always output valid SQL using a context-free grammar to constrain SQL generation. Finally, we show that dividing semantic parsing in two tasks, candidate SQLs generation and candidate re-ranking, is a promising research avenue that can reduce the need for large LMs. | null | null | 10.18653/v1/2022.gem-1.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,770 |
inproceedings | kovalchuk-etal-2022-human | Human perceiving behavior modeling in evaluation of code generation models | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.24/ | Kovalchuk, Sergey V. and Lomshakov, Vadim and Aliev, Artem | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 287--294 | Within this study, we evaluated a series of code generation models based on CodeGen and GPTNeo to compare the metric-based performance and human evaluation. For a deeper analysis of human perceiving within the evaluation procedure we've implemented a 5-level Likert scale assessment of the model output using a perceiving model based on the Theory of Planned Behavior (TPB). Through such analysis, we showed an extension of model assessment as well as a deeper understanding of the quality and applicability of generated code for practical question answering. The approach was evaluated with several model settings in order to assess diversity in quality and style of answer. With the TPB-based model, we showed a different level of perceiving the model result, namely personal understanding, agreement level, and readiness to use the particular code. With such analysis, we investigate a series of issues in code generation as natural language generation (NLG) problems observed in a practical context of programming question-answering with code. | null | null | 10.18653/v1/2022.gem-1.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,771 |
inproceedings | trotta-etal-2022-nearest | Nearest Neighbor Language Models for Stylistic Controllable Generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.25/ | Trotta, Severino and Flek, Lucie and Welch, Charles | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 295--305 | Recent language modeling performance has been greatly improved by the use of external memory. This memory encodes the context so that similar contexts can be recalled during decoding. This similarity depends on how the model learns to encode context, which can be altered to include other attributes, such as style. We construct and evaluate an architecture for this purpose, using corpora annotated for politeness, formality, and toxicity. Through extensive experiments and human evaluation we demonstrate the potential of our method to generate text while controlling style. We find that style-specific datastores improve generation performance, though results vary greatly across styles, and the effect of pretraining data and specific styles should be explored in future work. | null | null | 10.18653/v1/2022.gem-1.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,772 |
inproceedings | popovic-belz-2022-reporting | On reporting scores and agreement for error annotation tasks | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.26/ | Popović, Maja and Belz, Anya | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 306--315 | This work examines different ways of aggregating scores for error annotation in MT outputs: raw error counts, error counts normalised over total number of words ('word percentage'), and error counts normalised over total number of errors ('error percentage'). We use each of these three scores to calculate inter-annotator agreement in the form of Krippendorff's $\alpha$ and Pearson's $r$ and compare the obtained numbers, overall and separately for different types of errors. While each score has its advantages depending on the goal of the evaluation, we argue that the best way of estimating inter-annotator agreement using such numbers is raw counts. If the annotation process ensures that the total number of words cannot differ among the annotators (for example, due to adding omission symbols), normalising over number of words will lead to the same conclusions. In contrast, total number of errors is very subjective because different annotators often perceive different amounts of errors in the same text; therefore, normalising over this number can indicate lower agreements. | null | null | 10.18653/v1/2022.gem-1.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,773 |
inproceedings | gupta-etal-2022-answerability | Answerability: A custom metric for evaluating chatbot performance | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.27/ | Gupta, Pranav and Rajasekar, Anand A. and Patel, Amisha and Kulkarni, Mandar and Sunell, Alexander and Kim, Kyung and Ganapathy, Krishnan and Trivedi, Anusua | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 316--325 | Most commercial conversational AI products in domains spanning e-commerce, health care, finance, and education involve a hierarchy of NLP models that perform a variety of tasks such as classification, entity recognition, question-answering, sentiment detection, semantic text similarity, and so on. Despite our understanding of each of the constituent models, we do not have a clear view as to how these models affect the overall platform metrics. To bridge this gap, we define a metric known as answerability, which penalizes not only irrelevant or incorrect chatbot responses but also unhelpful responses that do not serve the chatbot's purpose despite being correct or relevant. Additionally, we describe a formula-based mathematical framework to relate individual model metrics to the answerability metric. We also describe a modeling approach for predicting a chatbot's answerability to a user question and its corresponding chatbot response. | null | null | 10.18653/v1/2022.gem-1.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,774 |
inproceedings | phillips-etal-2022-improved | Improved Evaluation of Automatic Source Code Summarisation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.28/ | Phillips, Jesse and Bowes, David and El-Haj, Mahmoud and Hall, Tracy | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 326--335 | Source code summaries are a vital tool for the understanding and maintenance of source code as they can be used to explain code in simple terms. However, source code with missing, incorrect, or outdated summaries is a common occurrence in production code. Automatic source code summarisation seeks to solve these issues by generating up-to-date summaries of source code methods. Recent work in automatically generating source code summaries uses neural networks for generating summaries; commonly Sequence-to-Sequence or Transformer models, pretrained on method-summary pairs. The most common method of evaluating the quality of these summaries is comparing the machine-generated summaries against human-written summaries. Summaries can be evaluated using n-gram-based translation metrics such as BLEU, METEOR, or ROUGE-L. However, these metrics alone can be unreliable and new Natural Language Generation metrics based on large pretrained language models provide an alternative. In this paper, we propose a method of improving the evaluation of a model by improving the preprocessing of the data used to train it, and we propose evaluating the model with a metric based on a language model pretrained on a natural language (English), alongside traditional metrics. Our evaluation suggests our model has been improved by cleaning and preprocessing the data used in model training. The addition of a pretrained language model metric alongside traditional metrics shows that both produce results which can be used to evaluate neural source code summarisation. | null | null | 10.18653/v1/2022.gem-1.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,775 |
inproceedings | howcroft-gkatzia-2022-nlg | Most NLG is Low-Resource: here's what we can do about it | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.29/ | Howcroft, David M. and Gkatzia, Dimitra | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 336--350 | Many domains and tasks in natural language generation (NLG) are inherently 'low-resource', where training data, tools and linguistic analyses are scarce. This poses a particular challenge to researchers and system developers in the era of machine-learning-driven NLG. In this position paper, we initially present the challenges researchers & developers often encounter when dealing with low-resource settings in NLG. We then argue that it is unsustainable to collect large aligned datasets or build large language models from scratch for every possible domain due to cost, labour, and time constraints, so researching and developing methods and resources for low-resource settings is vital. We then discuss current approaches to low-resource NLG, followed by proposed solutions and promising avenues for future work in NLG for low-resource settings. | null | null | 10.18653/v1/2022.gem-1.29 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,776 |
inproceedings | asaadi-etal-2022-giccs | GiCCS: A German in-Context Conversational Similarity Benchmark | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.30/ | Asaadi, Shima and Kolagar, Zahra and Liebel, Alina and Zarcone, Alessandra | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 351--362 | The Semantic textual similarity (STS) task is commonly used to evaluate the semantic representations that language models (LMs) learn from texts, under the assumption that good-quality representations will yield accurate similarity estimates. When it comes to estimating the similarity of two utterances in a dialogue, however, the conversational context plays a particularly important role. We argue for the need of benchmarks specifically created using conversational data in order to evaluate conversational LMs in the STS task. We introduce GiCCS, a first conversational STS evaluation benchmark for German. We collected the similarity annotations for GiCCS using best-worst scaling and presenting the target items in context, in order to obtain highly-reliable context-dependent similarity scores. We present benchmarking experiments for evaluating LMs on capturing the similarity of utterances. Results suggest that pretraining LMs on conversational data and providing conversational context can be useful for capturing similarity of utterances in dialogues. GiCCS will be publicly available to encourage benchmarking of conversational LMs. | null | null | 10.18653/v1/2022.gem-1.30 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,777 |
inproceedings | clive-etal-2022-control | Control Prefixes for Parameter-Efficient Text Generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.31/ | Clive, Jordan and Cao, Kris and Rei, Marek | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 363--382 | Prefix-tuning is a parameter-efficient and powerful technique for adapting a pre-trained language model to a downstream application. However, it uses the same dataset-level tuned set of parameters for all examples in the dataset. We extend the framework with a dynamic method, Control Prefixes, which allows for the effective inclusion of input-dependent information, thereby demonstrating how prefix-tuning can be used for controlled text generation tasks. The method incorporates attribute-level learnable representations into different layers of a pre-trained Transformer, enabling the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). Using only 0.1–2% additional trainable parameters, we show Control Prefixes can even outperform full fine-tuning methods, and present state-of-the-art results on several data-to-text datasets, including WebNLG. We also examine the common case where input-dependent information is unavailable at test time and show Control Prefixes can excel in this setting also. | null | null | 10.18653/v1/2022.gem-1.31 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,778 |
inproceedings | huidrom-belz-2022-survey | A Survey of Recent Error Annotation Schemes for Automatically Generated Text | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.33/ | Huidrom, Rudali and Belz, Anya | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 383--398 | While automatically computing numerical scores remains the dominant paradigm in NLP system evaluation, error analysis is receiving increasing attention, with numerous error annotation schemes being proposed for automatically generated text. However, there is little agreement about what error annotation schemes should look like, how many different types of errors should be distinguished and at what level of granularity. In this paper, our aim is to map out recent work on annotating errors in automatically generated text, with a particular focus on error taxonomies. We describe our systematic paper selection process, and survey the error annotation schemes reported in the papers, drawing out similarities and differences between them. Finally, we characterise the issues that would make it difficult to move from the current situation to a standardised error taxonomy for annotating errors in automatically generated text. | null | null | 10.18653/v1/2022.gem-1.33 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,779 |
inproceedings | casola-etal-2022-whats | What's in a (dataset's) name? The case of BigPatent | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.34/ | Casola, Silvia and Lavelli, Alberto and Saggion, Horacio | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 399--404 | Sharing datasets and benchmarks has been crucial for rapidly improving Natural Language Processing models and systems. Documenting datasets' characteristics (and any modification introduced over time) is equally important to avoid confusion and make comparisons reliable. Here, we describe the case of BigPatent, a dataset for patent summarization that exists in at least two rather different versions under the same name. While previous literature has not clearly distinguished among versions, their differences do not only lie at the surface level but also modify the dataset's core nature and, thus, the complexity of the summarization task. While this paper describes a specific case, we aim to shed light on new challenges that might emerge in resource sharing and advocate for comprehensive documentation of datasets and models. | null | null | 10.18653/v1/2022.gem-1.34 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,780 |
inproceedings | kour-etal-2022-measuring | Measuring the Measuring Tools: An Automatic Evaluation of Semantic Metrics for Text Corpora | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.35/ | Kour, George and Ackerman, Samuel and Farchi, Eitan Daniel and Raz, Orna and Carmeli, Boaz and Tavor, Ateret Anaby | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 405--416 | Similarity metrics for text corpora are becoming critical due to the tremendous growth in the number of generative models. These similarity metrics measure the semantic gap between human and machine-generated text on the corpus level. However, standard methods for evaluating the characteristics of these metrics have yet to be established. We propose a set of automatic measures for evaluating the characteristics of semantic similarity metrics for text corpora. Our measures allow us to sensibly compare and identify the strengths and weaknesses of these metrics. We demonstrate the effectiveness of our evaluation measures in capturing fundamental characteristics by comparing it to a collection of classical and state-of-the-art metrics. Our measures revealed that recent metrics are becoming better in identifying semantic distributional mismatch while classical metrics are more sensitive to perturbations in the surface text levels. | null | null | 10.18653/v1/2022.gem-1.35 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,781 |
inproceedings | blackburn-2022-multilingual | Multilingual Social Media Text Generation and Evaluation with Few-Shot Prompting | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.39/ | Blackburn, Mack | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 417--427 | This work adapts large language models to generate multilingual social media text that meets several objectives simultaneously: topic relevance, author style consistency, and reply validity. Leveraging existing online information behavior simulators, which currently only forecast activities but not content, our approach combines generalizable prompt formation and efficient evaluation to produce a believable, personalized, and responsive synthetic social network. According to some preliminary experiments, our multi-objective prompt formation and automatic evaluation/selection methods are able to yield a significant number of high-quality synthetic texts according to both standardized and trained metrics. | null | null | 10.18653/v1/2022.gem-1.39 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,782 |
inproceedings | ridenour-etal-2022-assessing | Assessing Inter-metric Correlation for Multi-document Summarization Evaluation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.40/ | Ridenour, Michael and Agrawal, Ameeta and Olabisi, Olubusayo | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 428--438 | Recent advances in automatic text summarization have contemporaneously been accompanied by a great deal of new metrics of automatic evaluation. This in turn has inspired recent research to re-assess these evaluation metrics to see how well they correlate with each other as well as with human evaluation, mostly focusing on single-document summarization (SDS) tasks. Although many of these metrics are typically also used for evaluating multi-document summarization (MDS) tasks, so far, little attention has been paid to studying them under such a distinct scenario. To address this gap, we present a systematic analysis of the inter-metric correlations for MDS tasks, while comparing and contrasting the results with SDS models. Using datasets from a wide range of domains (news, peer reviews, tweets, dialogues), we thus study a unified set of metrics under both the task setups. Our empirical analysis suggests that while most reference-based metrics show fairly similar trends across both multi- and single-document summarization, there is a notable lack of correlation between reference-free metrics in multi-document summarization tasks. | null | null | 10.18653/v1/2022.gem-1.40 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,783 |
inproceedings | lee-etal-2022-factual | Factual Error Correction for Abstractive Summaries Using Entity Retrieval | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.41/ | Lee, Hwanhee and Park, Cheoneum and Yoon, Seunghyun and Bui, Trung and Dernoncourt, Franck and Kim, Juae and Jung, Kyomin | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 439--444 | Despite the recent advancements in abstractive summarization systems leveraged from large-scale datasets and pre-trained language models, the factual correctness of the summary is still insufficient. One line of trials to mitigate this problem is to include a post-editing process that can detect and correct factual errors in the summary. In building such a system, it is strongly required that 1) the process has a high success rate and interpretability and 2) it has a fast running time. Previous approaches focus on the regeneration of the summary, resulting in low interpretability and high computing resources. In this paper, we propose an efficient factual error correction system RFEC based on entity retrieval. RFEC first retrieves the evidence sentences from the original document by comparing the sentences with the target summary to reduce the length of the text to analyze. Next, RFEC detects entity-level errors in the summaries using the evidence sentences and substitutes the wrong entities with the accurate entities from the evidence sentences. Experimental results show that our proposed error correction system shows more competitive performance than baseline methods in correcting factual errors with a much faster speed. | null | null | 10.18653/v1/2022.gem-1.41 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,784 |
inproceedings | chen-etal-2022-coherent | Coherent Long Text Generation by Contrastive Soft Prompt | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.42/ | Chen, Guandan and Pu, Jiashu and Xi, Yadong and Zhang, Rongsheng | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 445--455 | Improving the coherence of long text generation is an important but challenging task. Existing models still struggle to generate a logical and coherent sentence sequence. It is difficult for a model to plan long text generation and avoid generating incoherent texts from a high-level semantic perspective. We speculate that this is due to two factors: (1) current training methods mainly rely on maximum likelihood estimation computed from token-level probability prediction; (2) the role of incoherent texts has been largely under-explored, thus the noised generated texts with errors are out-of-distribution for the model. To address these issues, in this paper, we propose a Contrastive Soft Prompt (CSP) model for improving the coherence of long text generation. It learns text representations in the hidden space for better planning long text generation. To this end, it jointly learns to generate a text representation close to representations of coherent texts and away from incoherent ones, and then generate long text taking this representation as the soft prompt. We conduct experiments on two public story generation datasets, and experiment results show that our method can generate more coherent stories than the state-of-the-art model. | null | null | 10.18653/v1/2022.gem-1.42 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,785 |
inproceedings | sundararajan-etal-2022-error | Error Analysis of ToTTo Table-to-Text Neural NLG Models | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.43/ | Sundararajan, Barkavi and Sripada, Somayajulu and Reiter, Ehud | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 456--470 | We report error analysis of outputs from seven Table-to-Text generation models fine-tuned on ToTTo, an open-domain English language dataset. A manual error annotation of a subset of outputs (a total of 5,278 sentences) belonging to the topic of Politics generated by these seven models has been carried out. Our error annotation focused on eight categories of errors. The error analysis shows that more than 45% of sentences from each of the seven models have been error-free. It uncovered specific classes of errors: WORD errors are the dominant errors in all seven models, NAME and NUMBER errors are committed more often by two of the GeM benchmark models, whereas DATE-DIMENSION and OTHER categories of errors are more common in our Table-to-Text models. | null | null | 10.18653/v1/2022.gem-1.43 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,786 |
inproceedings | mahajan-etal-2022-improving | Improving Dialogue Act Recognition with Augmented Data | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.44/ | Mahajan, Khyati and Parikh, Soham and Vohra, Quaizar and Tiwari, Mitul and Shaikh, Samira | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 471--479 | We present our work on augmenting dialog act recognition capabilities utilizing synthetically generated data. Our work is motivated by the limitations of current dialog act datasets, and the need to adapt for new domains as well as ambiguity in utterances written by humans. We list our observations and findings towards how synthetically generated data can contribute meaningfully towards more robust dialogue act recognition models extending to new domains. Our major finding shows that synthetic data, which is linguistically varied, can be very useful towards this goal and increase the performance from (0.39, 0.16) to (0.85, 0.88) for AFFIRM and NEGATE dialog acts respectively. | null | null | 10.18653/v1/2022.gem-1.44 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,787 |
inproceedings | ilinykh-dobnik-2022-decoding | Do Decoding Algorithms Capture Discourse Structure in Multi-Modal Tasks? A Case Study of Image Paragraph Generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.45/ | Ilinykh, Nikolai and Dobnik, Simon | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 480--493 | This paper describes insights into how different inference algorithms structure discourse in image paragraphs. We train a multi-modal transformer and compare 11 variations of decoding algorithms. We propose to evaluate image paragraphs not only with standard automatic metrics, but also with a more extensive, "under the hood" analysis of the discourse formed by sentences. Our results show that while decoding algorithms can be unfaithful to the reference texts, they still generate grounded descriptions, but they also lack understanding of the discourse structure and differ from humans in terms of attentional structure over images. | null | null | 10.18653/v1/2022.gem-1.45 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,788 |
inproceedings | de-bruyn-etal-2022-20q | 20{Q}: Overlap-Free World Knowledge Benchmark for Language Models | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.46/ | De Bruyn, Maxime and Lotfi, Ehsan and Buhmann, Jeska and Daelemans, Walter | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 494--508 | What do language models know about our world? This question is hard to answer but important to get right. To this end, we introduce 20Q, a novel benchmark using the Twenty Questions game to evaluate world knowledge and common sense of language models. Thanks to our overlap-free benchmark, language models learn the game of Twenty Questions without learning relevant knowledge for the test set. We uncover two intuitive factors influencing the world knowledge of language models: the size of the model and the topic frequency in the pre-training data. Moreover, we show that in-context learning is inefficient for evaluating language models' world knowledge {---} fine-tuning is necessary to show their true capabilities. Lastly, our results show room for improvement to enhance the world knowledge and common sense of large language models. A potential solution would be to up-sample infrequent topics in the pre-training of language models. | null | null | 10.18653/v1/2022.gem-1.46 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,789
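An "overlap-free" benchmark implies a filtering step between training and test items. A hypothetical sketch of such a filter follows, assuming token n-gram overlap as the criterion; the paper's exact criterion may differ, and the toy questions are illustrative.

```python
# Hypothetical overlap filter: drop any test item that shares a token
# n-gram with the training set. n=3 is an assumed threshold.
def ngrams(text, n=3):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def overlap_free(train_texts, test_texts, n=3):
    seen = set()
    for t in train_texts:
        seen |= ngrams(t, n)
    return [t for t in test_texts if not (ngrams(t, n) & seen)]

train = ["is it an animal", "is it bigger than a car"]
test = ["is it an animal that flies", "does it live underwater"]
print(overlap_free(train, test))  # keeps only the non-overlapping question
```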
inproceedings | lotfi-etal-2022-name | What Was Your Name Again? Interrogating Generative Conversational Models For Factual Consistency Evaluation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.47/ | Lotfi, Ehsan and De Bruyn, Maxime and Buhmann, Jeska and Daelemans, Walter | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 509--519 | Generative conversational agents are known to suffer from problems like inconsistency and hallucination, and a big challenge in studying these issues remains evaluation: they are not properly reflected in common text generation metrics like perplexity or BLEU, and alternative implicit methods like semantic similarity or NLI labels can be misguided when few specific tokens are decisive. In this work we propose ConsisTest: a factual consistency benchmark including both WH and Y/N questions based on PersonaChat, along with a hybrid evaluation pipeline which aims to get the best of symbolic and sub-symbolic methods. Using these and focusing on pretrained generative models like BART, we provide detailed statistics and analysis on how the model`s consistency is affected by variations in question and context. | null | null | 10.18653/v1/2022.gem-1.47 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,790
inproceedings | kalbaliyev-sirts-2022-narrative | Narrative Why-Question Answering: A Review of Challenges and Datasets | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.48/ | Kalbaliyev, Emil and Sirts, Kairit | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 520--530 | Narrative Why-Question Answering is an important task to assess the causal reasoning ability of systems in narrative settings. Further progress in this domain needs clear identification of challenges related to understanding the causal structure of narration. In this paper, we give an overview of the challenges related to both narrative understanding and why-question answering, because Narrative Why-Question Answering combines the characteristics of these domains. We also identify narrative QA datasets containing why-questions and analyze their characteristics through the lens of these challenges. | null | null | 10.18653/v1/2022.gem-1.48 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,791 |
inproceedings | sobrevilla-cabezudo-pardo-2022-exploring | Exploring a {POS}-based Two-stage Approach for Improving Low-Resource {AMR}-to-Text Generation | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.49/ | Sobrevilla Cabezudo, Marco Antonio and Pardo, Thiago | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 531--538 | This work presents a two-stage approach for tackling low-resource AMR-to-text generation for Brazilian Portuguese. Our approach consists of (1) generating a masked surface realization in which some tokens are masked according to its Part-of-Speech class and (2) infilling the masked tokens according to the AMR graph and the previous masked surface realization. Results show a slight improvement over the baseline, mainly in BLEU (1.63) and METEOR (0.02) scores. Moreover, we evaluate the pipeline components separately, showing that the bottleneck of the pipeline is the masked surface realization. Finally, the human evaluation suggests that models still suffer from hallucinations, and some strategies to deal with the problems found are proposed. | null | null | 10.18653/v1/2022.gem-1.49 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,792 |
inproceedings | keymanesh-etal-2022-makes | What Makes Data-to-Text Generation Hard for Pretrained Language Models? | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.50/ | Keymanesh, Moniba and Benton, Adrian and Dredze, Mark | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 539--554 | Expressing natural language descriptions of structured facts or relations {--} data-to-text generation (D2T) {--} increases the accessibility of structured knowledge repositories. Previous work shows that pre-trained language models (PLMs) perform remarkably well on this task after fine-tuning on a significant amount of task-specific training data. On the other hand, while auto-regressive PLMs can generalize from a few task examples, their efficacy at D2T is largely unexplored. Furthermore, we have an incomplete understanding of the limits of PLMs on D2T. In this work, we conduct an empirical study of both fine-tuned and auto-regressive PLMs on the DART multi-domain D2T dataset. We consider their performance as a function of the amount of task-specific data and how the data is incorporated into the models: zero and few-shot learning, and fine-tuning of model weights. In addition, we probe the limits of PLMs by measuring performance on subsets of the evaluation data: novel predicates and abstractive test examples. To improve the performance on these subsets, we investigate two techniques: providing predicate descriptions in the context and re-ranking generated candidates by information reflected in the source. Finally, we conduct a human evaluation of model errors and show that D2T generation tasks would benefit from datasets with more careful manual curation. | null | null | 10.18653/v1/2022.gem-1.50 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,793 |
inproceedings | king-etal-2022-dont | Don`t Say What You Don`t Know: Improving the Consistency of Abstractive Summarization by Constraining Beam Search | Bosselut, Antoine and Chandu, Khyathi and Dhole, Kaustubh and Gangal, Varun and Gehrmann, Sebastian and Jernite, Yacine and Novikova, Jekaterina and Perez-Beltrachini, Laura | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.gem-1.51/ | King, Daniel and Shen, Zejiang and Subramani, Nishant and Weld, Daniel S. and Beltagy, Iz and Downey, Doug | Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) | 555--571 | Abstractive summarization systems today produce fluent and relevant output, but often {\textquotedblleft}hallucinate{\textquotedblright} statements not supported by the source text. We analyze the connection between hallucinations and training data, and find evidence that models hallucinate because they train on target summaries that are unsupported by the source. Based on our findings, we present PINOCCHIO, a new decoding method that improves the consistency of a transformer-based abstractive summarizer by constraining beam search to avoid hallucinations. Given the model states and outputs at a given step, PINOCCHIO detects likely model hallucinations based on various measures of attribution to the source text. PINOCCHIO backtracks to find more consistent output, and can opt to produce no summary at all when no consistent generation can be found. In experiments, we find that PINOCCHIO improves the consistency of generation by an average of 67{\%} on two abstractive summarization datasets, without hurting recall. | null | null | 10.18653/v1/2022.gem-1.51 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,794 |
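PINOCCHIO's core idea, steering beam search away from tokens that cannot be attributed to the source, can be illustrated with a toy logit filter. This sketch is not the published method: it crudely substitutes "the token occurs in the source" for the paper's attribution measures, and the vocabulary, penalty, and token ids are all illustrative.

```python
import torch

# Toy source-constrained decoding step (a stand-in for PINOCCHIO's
# attribution measures): down-weight tokens absent from the source.
def constrain_logits(logits, source_token_ids, penalty=5.0):
    allowed = torch.zeros_like(logits, dtype=torch.bool)
    allowed[list(set(source_token_ids))] = True
    return torch.where(allowed, logits, logits - penalty)

logits = torch.randn(50)             # fake next-token logits over 50 ids
source_token_ids = [3, 7, 12, 48]    # token ids appearing in the source
print(constrain_logits(logits, source_token_ids).topk(3))
```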
inproceedings | maronikolakis-etal-2022-analyzing | Analyzing Hate Speech Data along Racial, Gender and Intersectional Axes | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.1/ | Maronikolakis, Antonis and Baader, Philip and Sch{\"u}tze, Hinrich | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 1--7 | To tackle the rising phenomenon of hate speech, efforts have been made towards data curation and analysis. When it comes to analysis of bias, previous work has focused predominantly on race. In our work, we further investigate bias in hate speech datasets along racial, gender and intersectional axes. We identify strong bias against African American English (AAE), masculine and AAE+Masculine tweets, which are annotated as disproportionately more hateful and offensive than tweets from other demographics. We provide evidence that BERT-based models propagate this bias and show that balancing the training data for these protected attributes can lead to fairer models with regards to gender, but not race. | null | null | 10.18653/v1/2022.gebnlp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,796
inproceedings | li-etal-2022-analysis | Analysis of Gender Bias in Social Perception and Judgement Using {C}hinese Word Embeddings | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.2/ | Li, Jiali and Zhu, Shucheng and Liu, Ying and Liu, Pengyuan | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 8--16 | Gender is a construction in line with social perception and judgment. An important means of this construction is through languages. When natural language processing tools, such as word embeddings, associate gender with the relevant categories of social perception and judgment, it is likely to cause bias and harm to those groups that do not conform to the mainstream social perception and judgment. Using 12,251 Chinese word embeddings as an intermediary, this paper studies the relationship between social perception and judgment categories and gender. The results reveal that these grammatically gender-neutral Chinese word embeddings show a certain gender bias, which is consistent with the mainstream society`s perception and judgment of gender. Men are judged by their actions and perceived in bad, easily-disgusted, bad-tempered and rational roles, while women are judged by their appearances and perceived in perfect, either happy or sad, and emotional roles. | null | null | 10.18653/v1/2022.gebnlp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,797
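Association tests like the one implied above reduce to cosine-similarity differences between a target word and two attribute sets. A minimal WEAT-style sketch follows; the random toy vectors and English word lists are assumptions standing in for the 12,251 Chinese embeddings the paper studies.

```python
import numpy as np

# WEAT-style association sketch with toy vectors and illustrative words.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["he", "she", "rational", "emotional", "action", "appearance"]}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word, male_words, female_words):
    m = np.mean([cos(emb[word], emb[w]) for w in male_words])
    f = np.mean([cos(emb[word], emb[w]) for w in female_words])
    return m - f  # > 0 means closer to the male attribute words

for w in ["rational", "emotional", "action", "appearance"]:
    print(w, round(association(w, ["he"], ["she"]), 3))
```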
inproceedings | limisiewicz-marecek-2022-dont | Don`t Forget About Pronouns: Removing Gender Bias in Language Models Without Losing Factual Gender Information | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.3/ | Limisiewicz, Tomasz and Mare{\v{c}}ek, David | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 17--29 | The representations in large language models contain multiple types of gender information. We focus on two types of such signals in English texts: factual gender information, which is a grammatical or semantic property, and gender bias, which is the correlation between a word and specific gender. We can disentangle the model`s embeddings and identify components encoding both types of information with probing. We aim to diminish the stereotypical bias in the representations while preserving the factual gender signal. Our filtering method shows that it is possible to decrease the bias of gender-neutral profession names without significant deterioration of language modeling capabilities. The findings can be applied to language generation to mitigate reliance on stereotypes while preserving gender agreement in coreferences. | null | null | 10.18653/v1/2022.gebnlp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,798 |
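Filtering a bias component out of embeddings is often realized as an orthogonal projection. The sketch below shows that generic operation, v' = v - (v·b)b for a unit bias direction b; it is in the spirit of, but not identical to, the paper's probe-based filter, and all vectors are random toys.

```python
import numpy as np

# Remove the component of a vector along an assumed bias direction.
rng = np.random.default_rng(0)

def remove_component(v, bias_dir):
    b = bias_dir / np.linalg.norm(bias_dir)
    return v - (v @ b) * b

he, she = rng.normal(size=50), rng.normal(size=50)
bias_dir = he - she                    # crude gender direction
nurse = rng.normal(size=50)
debiased = remove_component(nurse, bias_dir)

b = bias_dir / np.linalg.norm(bias_dir)
print("projection before:", nurse @ b, "after:", debiased @ b)  # after ~ 0
```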
inproceedings | havens-etal-2022-uncertainty | Uncertainty and Inclusivity in Gender Bias Annotation: An Annotation Taxonomy and Annotated Datasets of {B}ritish {E}nglish Text | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.4/ | Havens, Lucy and Terras, Melissa and Bach, Benjamin and Alex, Beatrice | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 30--57 | Mitigating harms from gender biased language in Natural Language Processing (NLP) systems remains a challenge, and the situated nature of language means bias is inescapable in NLP data. Though efforts to mitigate gender bias in NLP are numerous, they often vaguely define gender and bias, only consider two genders, and do not incorporate uncertainty into models. To address these limitations, in this paper we present a taxonomy of gender biased language and apply it to create annotated datasets. We created the taxonomy and annotated data with the aim of making gender bias in language transparent. If biases are communicated clearly, varieties of biased language can be better identified and measured. Our taxonomy contains eleven types of gender biases inclusive of people whose gender expressions do not fit into the binary conceptions of woman and man, and whose gender differs from that they were assigned at birth, while also allowing annotators to document unknown gender information. The taxonomy and annotated data will, in future work, underpin analysis and more equitable language model development. | null | null | 10.18653/v1/2022.gebnlp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,799 |
inproceedings | li-etal-2022-debiasing | Debiasing Neural Retrieval via In-batch Balancing Regularization | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.5/ | Li, Yuantong and Wei, Xiaokai and Wang, Zijian and Wang, Shen and Bhatia, Parminder and Ma, Xiaofei and Arnold, Andrew | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 58--66 | People frequently interact with information retrieval (IR) systems; however, IR models exhibit biases and discrimination towards various demographics. In-processing fair ranking methods provide a trade-off between accuracy and fairness by adding a fairness-related regularization term to the loss function. However, there have been no intuitive objective functions that depend on the click probability and user engagement to directly optimize towards this. In this work, we propose the \textbf{I}n-\textbf{B}atch \textbf{B}alancing \textbf{R}egularization (IBBR) to mitigate the ranking disparity among subgroups. In particular, we develop a differentiable \textbf{normed Pairwise Ranking Fairness} (nPRF) and leverage the T-statistics on top of nPRF over subgroups as a regularization to improve fairness. Empirical results with the BERT-based neural rankers on the MS MARCO Passage Retrieval dataset with the human-annotated non-gendered queries benchmark (CITATION) show that our IBBR method with nPRF achieves significantly less bias with minimal degradation in ranking performance compared with the baseline. | null | null | 10.18653/v1/2022.gebnlp-1.5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,800
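The "T-statistics over subgroups as a regularization" idea can be sketched with a two-sample t-statistic on the ranker's scores, added to the task loss. This is a hedged toy, not the released IBBR code: the group labels, placeholder ranking loss, and the 0.1 weight are all assumptions.

```python
import torch

# Hedged sketch of an in-batch balancing term: a t-statistic between two
# subgroups' relevance scores, penalized alongside a stand-in task loss.
def t_statistic(a, b):
    return (a.mean() - b.mean()) / torch.sqrt(
        a.var(unbiased=True) / a.numel() + b.var(unbiased=True) / b.numel())

scores = torch.randn(16, requires_grad=True)           # ranker outputs
group = torch.tensor([i % 2 == 0 for i in range(16)])  # subgroup labels
ranking_loss = scores.mean() * 0                       # placeholder loss
loss = ranking_loss + 0.1 * t_statistic(scores[group], scores[~group]).abs()
loss.backward()
print("gradients reach the scores:", scores.grad is not None)
```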
inproceedings | joniak-aizawa-2022-gender | Gender Biases and Where to Find Them: Exploring Gender Bias in Pre-Trained Transformer-based Language Models Using Movement Pruning | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.6/ | Joniak, Przemyslaw and Aizawa, Akiko | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 67--73 | Language model debiasing has emerged as an important field of study in the NLP community. Numerous debiasing techniques were proposed, but bias ablation remains an unaddressed issue. We demonstrate a novel framework for inspecting bias in pre-trained transformer-based language models via movement pruning. Given a model and a debiasing objective, our framework finds a subset of the model containing less bias than the original model. We implement our framework by pruning the model while fine-tuning it on the debiasing objective. Only the pruning scores are optimized {--} parameters coupled with the model`s weights that act as gates. We experiment with pruning attention heads, an important building block of transformers: we prune square blocks, as well as establish a new way of pruning the entire heads. Lastly, we demonstrate the usage of our framework using gender bias, and based on our findings, we propose an improvement to an existing debiasing method. Additionally, we re-discover a bias-performance trade-off: the better the model performs, the more bias it contains. | null | null | 10.18653/v1/2022.gebnlp-1.6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,801
inproceedings | parasurama-sedoc-2022-gendered | Gendered Language in Resumes and its Implications for Algorithmic Bias in Hiring | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.7/ | Parasurama, Prasanna and Sedoc, Jo{\~a}o | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 74--74 | Despite growing concerns around gender bias in NLP models used in algorithmic hiring, there is little empirical work studying the extent and nature of gendered language in resumes. Using a corpus of 709k resumes from IT firms, we train a series of models to classify the gender of the applicant, thereby measuring the extent of gendered information encoded in resumes. We also investigate whether it is possible to obfuscate gender from resumes by removing gender identifiers, hobbies, gender sub-space in embedding models, etc. We find that there is a significant amount of gendered information in resumes even after obfuscation. A simple Tf-Idf model can learn to classify gender with AUROC=0.75, and more sophisticated transformer-based models achieve AUROC=0.8. We further find that gender predictive values have low correlation with gender direction of embeddings {--} meaning that what is predictive of gender is much more than what is {\textquotedblleft}gendered{\textquotedblright} in the masculine/feminine sense. We discuss the algorithmic bias and fairness implications of these findings in the hiring context. | null | null | 10.18653/v1/2022.gebnlp-1.7 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,802
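The "simple Tf-Idf model" baseline above is a few lines of scikit-learn. The sketch below is a toy reproduction of that setup under stated assumptions: the two template texts and binary labels stand in for the 709k resumes, which are not public.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in corpus; 1 = female, 0 = male (illustrative labels only).
texts = ["softball coach and nurse", "football coach and engineer"] * 50
labels = [1, 0] * 50

Xtr, Xte, ytr, yte = train_test_split(texts, labels,
                                      random_state=0, stratify=labels)
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(Xtr), ytr)
proba = clf.predict_proba(vec.transform(Xte))[:, 1]
print("AUROC:", roc_auc_score(yte, proba))
```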
inproceedings | van-der-wal-etal-2022-birth | The Birth of Bias: A case study on the evolution of gender bias in an {E}nglish language model | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.8/ | Van Der Wal, Oskar and Jumelet, Jaap and Schulz, Katrin and Zuidema, Willem | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 75--75 | Detecting and mitigating harmful biases in modern language models are widely recognized as crucial, open problems. In this paper, we take a step back and investigate how language models come to be biased in the first place. We use a relatively small language model with the LSTM architecture, trained on an English Wikipedia corpus. With full access to the data and to the model parameters as they change during every step while training, we can map in detail how the representation of gender develops, what patterns in the dataset drive this, and how the model`s internal state relates to the bias in a downstream task (semantic textual similarity). We find that the representation of gender is dynamic and identify different phases during training. Furthermore, we show that gender information is represented increasingly locally in the input embeddings of the model and that, as a consequence, debiasing these can be effective in reducing the downstream bias. Monitoring the training dynamics allows us to detect an asymmetry in how the female and male gender are represented in the input embeddings. This is important, as it may cause naive mitigation strategies to introduce new undesirable biases. We discuss the relevance of the findings for mitigation strategies more generally and the prospects of generalizing our methods to larger language models, the Transformer architecture, other languages and other undesirable biases. | null | null | 10.18653/v1/2022.gebnlp-1.8 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,803
inproceedings | akyurek-etal-2022-challenges | Challenges in Measuring Bias via Open-Ended Language Generation | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.9/ | Aky{\"u}rek, Afra Feyza and Kocyigit, Muhammed Yusuf and Paik, Sejin and Wijaya, Derry Tanti | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 76--76 | Researchers have devised numerous ways to quantify social biases vested in pretrained language models. As some language models are capable of generating coherent completions given a set of textual prompts, several prompting datasets have been proposed to measure biases between social groups{---}posing language generation as a way of identifying biases. In this opinion paper, we analyze how specific choices of prompt sets, metrics, automatic tools and sampling strategies affect bias results. We find out that the practice of measuring biases through text completion is prone to yielding contradicting results under different experiment settings. We additionally provide recommendations for reporting biases in open-ended language generation for a more complete outlook of biases exhibited by a given language model. Code to reproduce the results is released under \url{https://github.com/feyzaakyurek/bias-textgen}. | null | null | 10.18653/v1/2022.gebnlp-1.9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,804
inproceedings | srinivasan-bisk-2022-worst | Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.10/ | Srinivasan, Tejas and Bisk, Yonatan | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 77--85 | Numerous works have analyzed biases in vision and pre-trained language models individually - however, less attention has been paid to how these biases interact in multimodal settings. This work extends text-based bias analysis methods to investigate multimodal language models, and analyzes intra- and inter-modality associations and biases learned by these models. Specifically, we demonstrate that VL-BERT (Su et al., 2020) exhibits gender biases, often preferring to reinforce a stereotype over faithfully describing the visual scene. We demonstrate these findings on a controlled case-study and extend them for a larger set of stereotypically gendered entities. | null | null | 10.18653/v1/2022.gebnlp-1.10 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,805 |
inproceedings | liu-etal-2022-assessing | Assessing Group-level Gender Bias in Professional Evaluations: The Case of Medical Student End-of-Shift Feedback | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.11/ | Liu, Emmy and Tessler, Michael Henry and Dubosh, Nicole and Hiller, Katherine and Levy, Roger | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 86--93 | Though approximately 50{\%} of medical school graduates today are women, female physicians tend to be underrepresented in senior positions, make less money than their male counterparts and receive fewer promotions. There is a growing body of literature demonstrating gender bias in various forms of evaluation in medicine, but this work was mainly conducted by looking for specific words using fixed dictionaries such as LIWC and focused on global assessments of performance such as recommendation letters. We use a dataset of written and quantitative assessments of medical student performance on individual shifts of work, collected across multiple institutions, to investigate the extent to which gender bias exists in a day-to-day context for medical students. We investigate differences in the narrative comments given to male and female students by both male or female faculty assessors, using a fine-tuned BERT model. This allows us to examine whether groups are written about in systematically different ways, without relying on hand-crafted wordlists or topic models. We compare these results to results from the traditional LIWC method and find that, although we find no evidence of group-level gender bias in this dataset, terms related to family and children are used more in feedback given to women. | null | null | 10.18653/v1/2022.gebnlp-1.11 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,806 |
inproceedings | savoldi-etal-2022-dynamics | On the Dynamics of Gender Learning in Speech Translation | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.12/ | Savoldi, Beatrice and Gaido, Marco and Bentivogli, Luisa and Negri, Matteo and Turchi, Marco | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 94--111 | Due to the complexity of bias and the opaque nature of current neural approaches, there is a rising interest in auditing language technologies. In this work, we contribute to such a line of inquiry by exploring the emergence of gender bias in Speech Translation (ST). As a new perspective, rather than focusing on the final systems only, we examine their evolution over the course of training. In this way, we are able to account for different variables related to the learning dynamics of gender translation, and investigate when and how gender divides emerge in ST. Accordingly, for three language pairs (en → es, fr, it) we compare how ST systems behave for masculine and feminine translation at several levels of granularity. We find that masculine and feminine curves are dissimilar, with the feminine one being characterized by more erratic behaviour and late improvements over the course of training. Also, depending on the considered phenomena, their learning trends can be either antiphase or parallel. Overall, we show how such a progressive analysis can inform on the reliability and time-wise acquisition of gender, which is concealed by static evaluations and standard metrics. | null | null | 10.18653/v1/2022.gebnlp-1.12 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,807
inproceedings | tal-etal-2022-fewer | Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.13/ | Tal, Yarden and Magar, Inbal and Schwartz, Roy | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 112--120 | The size of pretrained models is increasing, and so is their performance on a variety of NLP tasks. However, as their memorization capacity grows, they might pick up more social biases. In this work, we examine the connection between model size and its gender bias (specifically, occupational gender bias). We measure bias in three masked language model families (RoBERTa, DeBERTa, and T5) in two setups: directly using prompt based method, and using a downstream task (Winogender). We find on the one hand that larger models receive higher bias scores on the former task, but when evaluated on the latter, they make fewer gender errors. To examine these potentially conflicting results, we carefully investigate the behavior of the different models on Winogender. We find that while larger models outperform smaller ones, the probability that their mistakes are caused by gender bias is higher. Moreover, we find that the proportion of stereotypical errors compared to anti-stereotypical ones grows with the model size. Our findings highlight the potential risks that can arise from increasing model size. | null | null | 10.18653/v1/2022.gebnlp-1.13 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,808 |
inproceedings | chen-etal-2022-unsupervised | Unsupervised Mitigating Gender Bias by Character Components: A Case Study of {C}hinese Word Embedding | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.14/ | Chen, Xiuying and Li, Mingzhe and Yan, Rui and Gao, Xin and Zhang, Xiangliang | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 121--128 | Word embeddings learned from massive text collections have demonstrated significant levels of discriminative biases. However, debiasing for Chinese, one of the most spoken languages, has been less explored. Meanwhile, existing literature relies on manually created supplementary data, which is time- and energy-consuming. In this work, we propose the first Chinese Gender-neutral word Embedding model (CGE) based on Word2vec, which learns gender-neutral word embeddings without any labeled data. Concretely, CGE utilizes and emphasizes the rich feminine and masculine information contained in radicals, i.e., a kind of component in Chinese characters, during the training procedure. This consequently alleviates discriminative gender biases. Experimental results on public benchmark datasets show that our unsupervised method outperforms the state-of-the-art supervised debiased word embedding models without sacrificing the functionality of the embedding model. | null | null | 10.18653/v1/2022.gebnlp-1.14 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,809
inproceedings | sesari-etal-2022-empirical | An Empirical Study on the Fairness of Pre-trained Word Embeddings | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.15/ | Sesari, Emeralda and Hort, Max and Sarro, Federica | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 129--144 | Pre-trained word embedding models are easily distributed and applied, as they alleviate users from the effort to train models themselves. With widely distributed models, it is important to ensure that they do not exhibit undesired behaviour, such as biases against population groups. For this purpose, we carry out an empirical study on evaluating the bias of 15 publicly available, pre-trained word embedding models based on three training algorithms (GloVe, word2vec, and fastText) with regard to four bias metrics (WEAT, SEMBIAS, DIRECT BIAS, and ECT). The choice of word embedding models and bias metrics is motivated by a literature survey over 37 publications which quantified bias on pre-trained word embeddings. Our results indicate that fastText is the least biased model (in 8 out of 12 cases) and that small vector lengths lead to higher bias. | null | null | 10.18653/v1/2022.gebnlp-1.15 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,810
inproceedings | kirtane-anand-2022-mitigating | Mitigating Gender Stereotypes in {H}indi and {M}arathi | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.16/ | Kirtane, Neeraja and Anand, Tanvi | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 145--150 | As the use of natural language processing increases in our day-to-day life, the need to address gender bias inherent in these systems also amplifies. This is because the inherent bias interferes with the semantic structure of the output of these systems while performing tasks in natural language processing. While research is being done in English to quantify and mitigate bias, debiasing methods in Indic Languages are either relatively nascent or absent for some Indic languages altogether. Most Indic languages are gendered, i.e., each noun is assigned a gender according to each language`s rules of grammar. As a consequence, evaluation differs from what is done in English. This paper evaluates the gender stereotypes in Hindi and Marathi languages. The methodologies will differ from the ones in the English language because there are masculine and feminine counterparts in the case of some words. We create a dataset of neutral and gendered occupation words, emotion words and measure bias with the help of Embedding Coherence Test (ECT) and Relative Norm Distance (RND). We also attempt to mitigate this bias from the embeddings. Experiments show that our proposed debiasing techniques reduce gender bias in these languages. | null | null | 10.18653/v1/2022.gebnlp-1.16 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,811 |
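One of the measures named above, Relative Norm Distance (RND), sums, over a list of neutral words, the difference between each word's distance to a male and a female reference vector. The sketch below follows the common formulation from Garg et al. (2018); the random toy vectors and English words are assumptions standing in for the Hindi/Marathi embeddings the paper evaluates.

```python
import numpy as np

# Relative Norm Distance sketch: positive values mean the neutral words
# sit closer to the female vector overall, negative closer to the male one.
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in
       ["doctor", "teacher", "nurse", "man", "woman"]}

def rnd(neutral_words, male_vec, female_vec):
    return sum(np.linalg.norm(emb[w] - male_vec) -
               np.linalg.norm(emb[w] - female_vec) for w in neutral_words)

print(rnd(["doctor", "teacher", "nurse"], emb["man"], emb["woman"]))
```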
inproceedings | orgad-belinkov-2022-choose | Choose Your Lenses: Flaws in Gender Bias Evaluation | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.17/ | Orgad, Hadas and Belinkov, Yonatan | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 151--167 | Considerable efforts to measure and mitigate gender bias in recent years have led to the introduction of an abundance of tasks, datasets, and metrics used in this vein. In this position paper, we assess the current paradigm of gender bias evaluation and identify several flaws in it. First, we highlight the importance of extrinsic bias metrics that measure how a model`s performance on some task is affected by gender, as opposed to intrinsic evaluations of model representations, which are less strongly connected to specific harms to people interacting with systems. We find that only a few extrinsic metrics are measured in most studies, although more can be measured. Second, we find that datasets and metrics are often coupled, and discuss how their coupling hinders the ability to obtain reliable conclusions, and how one may decouple them. We then investigate how the choice of the dataset and its composition, as well as the choice of the metric, affect bias measurement, finding significant variations across each of them. Finally, we propose several guidelines for more reliable gender bias evaluation. | null | null | 10.18653/v1/2022.gebnlp-1.17 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,812 |
inproceedings | mechura-2022-taxonomy | A Taxonomy of Bias-Causing Ambiguities in Machine Translation | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.18/ | M{\v{e}}chura, Michal | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 168--173 | This paper introduces a taxonomy of phenomena which cause bias in machine translation, covering gender bias (people being male and/or female), number bias (singular you versus plural you) and formality bias (informal you versus formal you). Our taxonomy is a formalism for describing situations in machine translation when the source text leaves some of these properties unspecified (e.g. does not say whether doctor is male or female) but the target language requires the property to be specified (e.g. because it does not have a gender-neutral word for doctor). The formalism described here is used internally by a web-based tool we have built for detecting and correcting bias in the output of any machine translator. | null | null | 10.18653/v1/2022.gebnlp-1.18 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,813
inproceedings | marce-poliak-2022-gender | On Gender Biases in Offensive Language Classification Models | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.19/ | Marc{\'e}, Sanjana and Poliak, Adam | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 174--183 | We explore whether neural Natural Language Processing models trained to identify offensive language in tweets contain gender biases. We add historically gendered and gender ambiguous American names to an existing offensive language evaluation set to determine whether models` predictions are sensitive or robust to gendered names. While we see some evidence that these models might be prone to biased stereotypes that men use more offensive language than women, our results indicate that these models` binary predictions might not greatly change based upon gendered names. | null | null | 10.18653/v1/2022.gebnlp-1.19 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,814
inproceedings | jentzsch-turan-2022-gender | Gender Bias in {BERT} - Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.20/ | Jentzsch, Sophie and Turan, Cigdem | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 184--199 | Pretrained language models are publicly available and constantly finetuned for various real-life applications. As they become capable of grasping complex contextual information, harmful biases are likely increasingly intertwined with those models. This paper analyses gender bias in BERT models with two main contributions: First, a novel bias measure is introduced, defining biases as the difference in sentiment valuation of female and male sample versions. Second, we comprehensively analyse BERT`s biases on the example of a realistic IMDB movie classifier. By systematically varying elements of the training pipeline, we can draw conclusions regarding their impact on the final model bias. Seven different public BERT models in nine training conditions, i.e. 63 models in total, are compared. Almost all conditions yield significant gender biases. Results indicate that reflected biases stem from public BERT models rather than task-specific data, emphasising the weight of responsible usage. | null | null | 10.18653/v1/2022.gebnlp-1.20 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,815
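The bias measure described above, the sentiment difference between paired female/male sentence versions, can be sketched with an off-the-shelf sentiment classifier. The model below is a generic stand-in, not the IMDB classifier the authors fine-tune, and the sentence pairs are illustrative.

```python
from transformers import pipeline

# Hedged sketch: score gender-swapped sentence pairs and average the
# female-minus-male sentiment difference as a crude bias estimate.
sentiment = pipeline("sentiment-analysis")

def signed_score(text):
    r = sentiment(text)[0]
    return r["score"] if r["label"] == "POSITIVE" else -r["score"]

pairs = [("She is a doctor.", "He is a doctor."),
         ("My sister loves this movie.", "My brother loves this movie.")]
diffs = [signed_score(f) - signed_score(m) for f, m in pairs]
print("mean female-minus-male sentiment:", sum(diffs) / len(diffs))
```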
inproceedings | touileb-etal-2022-occupational | Occupational Biases in {N}orwegian and Multilingual Language Models | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.21/ | Touileb, Samia and {\O}vrelid, Lilja and Velldal, Erik | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 200--211 | In this paper we explore how a demographic distribution of occupations, along gender dimensions, is reflected in pre-trained language models. We give a descriptive assessment of the distribution of occupations, and investigate to what extent these are reflected in four Norwegian and two multilingual models. To this end, we introduce a set of simple bias probes, and perform five different tasks combining gendered pronouns, first names, and a set of occupations from the Norwegian statistics bureau. We show that language specific models obtain more accurate results, and are much closer to the real-world distribution of clearly gendered occupations. However, we see that none of the models have correct representations of the occupations that are demographically balanced between genders. We also discuss the importance of the training data on which the models were trained on, and argue that template-based bias probes can sometimes be fragile, and a simple alteration in a template can change a model`s behavior. | null | null | 10.18653/v1/2022.gebnlp-1.21 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,816 |
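Template-based occupation probes like the ones described above typically compare masked-pronoun probabilities. A minimal sketch follows, using an English masked LM as a stand-in for the Norwegian and multilingual models the paper actually evaluates; the template and occupations are illustrative.

```python
from transformers import pipeline

# Compare P(he) vs P(she) in an occupation template with a fill-mask probe.
fill = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "carpenter"]:
    out = fill(f"[MASK] works as a {occupation}.", targets=["he", "she"])
    print(occupation, {o["token_str"]: round(o["score"], 4) for o in out})
```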
inproceedings | borchers-etal-2022-looking | Looking for a Handsome Carpenter! Debiasing {GPT}-3 Job Advertisements | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.22/ | Borchers, Conrad and Gala, Dalia and Gilburt, Benjamin and Oravkin, Eduard and Bounsi, Wilfried and Asano, Yuki M and Kirk, Hannah | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 212--224 | The growing capability and availability of generative language models has enabled a wide range of new downstream tasks. Academic research has identified, quantified and mitigated biases present in language models but is rarely tailored to downstream tasks where wider impact on individuals and society can be felt. In this work, we leverage one popular generative language model, GPT-3, with the goal of writing unbiased and realistic job advertisements. We first assess the bias and realism of zero-shot generated advertisements and compare them to real-world advertisements. We then evaluate prompt-engineering and fine-tuning as debiasing methods. We find that prompt-engineering with diversity-encouraging prompts gives no significant improvement to bias, nor realism. Conversely, fine-tuning, especially on unbiased real advertisements, can improve realism and reduce bias. | null | null | 10.18653/v1/2022.gebnlp-1.22 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,817 |
inproceedings | vasquez-etal-2022-heterocorpus | {H}etero{C}orpus: A Corpus for Heteronormative Language Detection | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.23/ | V{\'a}squez, Juan and Bel-Enguix, Gemma and Andersen, Scott Thomas and Ojeda-Trueba, Sergio-Luis | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 225--234 | In recent years, plenty of work has been done by the NLP community regarding gender bias detection and mitigation in language systems. Yet, to our knowledge, no one has focused on the difficult task of heteronormative language detection and mitigation. We consider this an urgent issue, since language technologies are growing increasingly present in the world and, as has been proven by various studies, NLP systems with biases can create real-life adverse consequences for women, gender minorities, racial minorities, and queer people. For these reasons, we propose and evaluate HeteroCorpus: a corpus created specifically for studying heteronormative language in English. Additionally, we propose a baseline set of classification experiments on our corpus, in order to show the performance of our corpus in classification tasks. | null | null | 10.18653/v1/2022.gebnlp-1.23 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,818
inproceedings | bertsch-etal-2022-evaluating | Evaluating Gender Bias Transfer from Film Data | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.24/ | Bertsch, Amanda and Oh, Ashley and Natu, Sanika and Gangu, Swetha and Black, Alan W. and Strubell, Emma | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 235--243 | Films are a rich source of data for natural language processing. OpenSubtitles (Lison and Tiedemann, 2016) is a popular movie script dataset, used for training models for tasks such as machine translation and dialogue generation. However, movies often contain biases that reflect society at the time, and these biases may be introduced during pre-training and influence downstream models. We perform sentiment analysis on template infilling (Kurita et al., 2019) and the Sentence Embedding Association Test (May et al., 2019) to measure how BERT-based language models change after continued pre-training on OpenSubtitles. We consider gender bias as a primary motivating case for this analysis, while also measuring other social biases such as disability. We show that sentiment analysis on template infilling is not an effective measure of bias due to the rarity of disability and gender identifying tokens in the movie dialogue. We extend our analysis to a longitudinal study of bias in film dialogue over the last 110 years and find that continued pre-training on OpenSubtitles encodes additional bias into BERT. We show that BERT learns associations that reflect the biases and representation of each film era, suggesting that additional care must be taken when using historical data. | null | null | 10.18653/v1/2022.gebnlp-1.24 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,819 |
inproceedings | hansal-etal-2022-indigenous | Indigenous Language Revitalization and the Dilemma of Gender Bias | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.25/ | Hansal, Oussama and Le, Ngoc Tan and Sadat, Fatiha | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 244--254 | Natural Language Processing (NLP), through its several applications, has been considered one of the most valuable fields in interdisciplinary research, as well as in computer science. However, it is not without its flaws. One of the most common flaws is bias. This paper examines the main linguistic challenges of Inuktitut, an indigenous language of Canada, and focuses on gender bias identification and mitigation. We explore the unique characteristics of this language to help us understand the right techniques that can be used to identify and mitigate implicit biases. We use some methods to quantify the gender bias existing in Inuktitut word embeddings; then we proceed to mitigate the bias and evaluate the performance of the debiased embeddings. Next, we explain how approaches for detecting and reducing bias in English embeddings may be transferred to Inuktitut embeddings by properly taking into account the language`s particular characteristics. Then, we compare the effect of the debiasing techniques on Inuktitut and English. Finally, we highlight some future research directions which will further help to push the boundaries. | null | null | 10.18653/v1/2022.gebnlp-1.25 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,820
inproceedings | jeoung-diesner-2022-changed | What changed? Investigating Debiasing Methods using Causal Mediation Analysis | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.26/ | Jeoung, Sullam and Diesner, Jana | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 255--265 | Previous work has examined how debiasing language models affects downstream tasks, specifically, how debiasing techniques influence task performance and whether debiased models also make impartial predictions in downstream tasks or not. However, what we don`t understand well yet is why debiasing methods have varying impacts on downstream tasks and how debiasing techniques affect internal components of language models, i.e., neurons, layers, and attentions. In this paper, we decompose the internal mechanisms of debiasing language models with respect to gender by applying causal mediation analysis to understand the influence of debiasing methods on toxicity detection as a downstream task. Our findings suggest a need to test the effectiveness of debiasing methods with different bias metrics, and to focus on changes in the behavior of certain components of the models, e.g., the first two layers of language models, and attention heads. | null | null | 10.18653/v1/2022.gebnlp-1.26 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,821
inproceedings | ahn-etal-2022-knowledge | Why Knowledge Distillation Amplifies Gender Bias and How to Mitigate from the Perspective of {D}istil{BERT} | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.27/ | Ahn, Jaimeen and Lee, Hwaran and Kim, Jinhwa and Oh, Alice | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 266--272 | Knowledge distillation is widely used to transfer the language understanding of a large model to a smaller model. However, the smaller model has been found to be more gender-biased after knowledge distillation than the large source model. This paper studies what causes gender bias to increase after the knowledge distillation process. Moreover, we suggest applying a variant of mixup during knowledge distillation, used here to increase generalizability during the distillation process rather than for data augmentation. By doing so, we can significantly reduce the gender bias amplification after knowledge distillation. We also conduct an experiment on the GLUE benchmark to demonstrate that even when mixup is applied, it does not have a significant adverse effect on the model`s performance. | null | null | 10.18653/v1/2022.gebnlp-1.27 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,822
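Mixup during distillation interpolates pairs of inputs and trains the student to match the teacher on the mixed examples. The sketch below shows that mechanic in miniature under stated assumptions: the two linear layers stand in for teacher/student LMs, and alpha=0.4 and the KL objective are illustrative choices, not the paper's exact recipe.

```python
import torch

# Hedged sketch: mixup-augmented knowledge distillation on toy models.
def mixup(x, alpha=0.4):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm]

teacher = torch.nn.Linear(8, 3)   # stand-in for the large source model
student = torch.nn.Linear(8, 3)   # stand-in for the distilled model
x = torch.randn(16, 8)

x_mix = mixup(x)
with torch.no_grad():
    t_logits = teacher(x_mix)
kd_loss = torch.nn.functional.kl_div(
    torch.log_softmax(student(x_mix), dim=-1),
    torch.softmax(t_logits, dim=-1),
    reduction="batchmean",
)
kd_loss.backward()
print("KD loss on mixed inputs:", kd_loss.item())
```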
inproceedings | pant-dadu-2022-incorporating | Incorporating Subjectivity into Gendered Ambiguous Pronoun ({GAP}) Resolution using Style Transfer | Hardmeier, Christian and Basta, Christine and Costa-juss{\`a}, Marta R. and Stanovsky, Gabriel and Gonen, Hila | jul | 2022 | Seattle, Washington | Association for Computational Linguistics | https://aclanthology.org/2022.gebnlp-1.28/ | Pant, Kartikey and Dadu, Tanvi | Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 273--281 | The GAP dataset is a Wikipedia-based evaluation dataset for gender bias detection in coreference resolution, containing mostly objective sentences. Since subjectivity is ubiquitous in our daily texts, it becomes necessary to evaluate models for both subjective and objective instances. In this work, we present a new evaluation dataset for gender bias in coreference resolution, GAP-Subjective, which increases the coverage of the original GAP dataset by including subjective sentences. We outline the methodology used to create this dataset. Firstly, we detect objective sentences and transfer them into their subjective variants using a sequence-to-sequence model. Secondly, we outline the thresholding techniques based on fluency and content preservation to maintain the quality of the sentences. Thirdly, we perform automated and human-based analysis of the style transfer and infer that the transferred sentences are of high quality. Finally, we benchmark both GAP and GAP-Subjective datasets using a BERT-based model and analyze its predictive performance and gender bias. | null | null | 10.18653/v1/2022.gebnlp-1.28 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,823 |
inproceedings | bonetti-tonelli-2022-analysis | An Analysis of Abusive Language Data Collected through a Game with a Purpose | Madge, Chris | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.games-1.1/ | Bonetti, Federico and Tonelli, Sara | Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference | 1--6 | In this work we present an analysis of abusive language annotations collected through a 3D video game. With this approach, we are able to involve teenagers in the annotation, i.e. typical targets of cyberbullying, whose data are usually not available for research purposes. Using the game in the framework of educational activities to empower teenagers against online abuse, we are able to obtain insights into how teenagers communicate, and what kind of messages they consider more offensive. While players produced interesting annotations and the distributions of classes between players and experts are similar, we obtained a significant number of mismatching judgements between experts and players. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,825
inproceedings | hou-etal-2022-applying | Applying Gamification Incentives in the Revita Language-learning System | Madge, Chris | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.games-1.2/ | Hou, Jue and Kylli{\"a}inen, Ilmari and Katinskaia, Anisia and Furlan, Giacomo and Yangarber, Roman | Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference | 7--16 | We explore the importance of gamification features in a language-learning platform designed for intermediate-to-advanced learners. Our main thesis is: learning toward advanced levels requires a massive investment of time. If the learner engages in more practice sessions, and if the practice sessions are longer, we can expect the results to be better. This principle appears to be tautologically self-evident. Yet, keeping the learner engaged in general{---}and building gamification features in particular{---}requires substantial efforts on the part of developers. Our goal is to keep the learner engaged in long practice sessions over many months{---}rather than for the short-term. This creates a conflict: In academic research on language learning, resources are typically scarce, and gamification usually is not considered an essential priority for allocating resources. We argue in favor of giving serious consideration to gamification in the language-learning setting{---}as a means of enabling in-depth research. In this paper, we introduce several gamification incentives in the Revita language-learning platform. We discuss the problems in obtaining quantitative measures of the effectiveness of gamification features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,826
inproceedings | althani-etal-2022-less | Less Text, More Visuals: Evaluating the Onboarding Phase in a {GWAP} for {NLP} | Madge, Chris | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.games-1.3/ | Althani, Fatima and Madge, Chris and Poesio, Massimo | Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference | 17--27 | Games-with-a-purpose find attracting players a challenge. To improve player recruitment, we explored two game design elements that can increase player engagement during the onboarding phase: a narrative and a tutorial. In a qualitative study with 12 players of linguistic and language learning games, we examined the effect of presentation format on players' engagement. Our reflexive thematic analysis found that in the onboarding phase of a GWAP for NLP, presenting players with visuals is expected and presenting too much text overwhelms them. Furthermore, players found that the instructions they were presented with lacked linguistic context. Additionally, the tutorial and game interface required refinement, as the feedback is unsupportive and the graphics are not clear. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,827
inproceedings | okur-etal-2022-nlu | {NLU} for Game-based Learning in Real: Initial Evaluations | Madge, Chris | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.games-1.4/ | Okur, Eda and Sahay, Saurav and Nachman, Lama | Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference | 28--39 | Intelligent systems designed for play-based interactions should be contextually aware of the users and their surroundings. Spoken Dialogue Systems (SDS) are critical for these interactive agents to carry out effective goal-oriented communication with users in real-time. For the real-world (i.e., in-the-wild) deployment of such conversational agents, improving the Natural Language Understanding (NLU) module of the goal-oriented SDS pipeline is crucial, especially with limited task-specific datasets. This study explores the potential benefits of a recently proposed transformer-based multi-task NLU architecture, mainly to perform Intent Recognition on small-size domain-specific educational game datasets. The evaluation datasets were collected from children practicing basic math concepts via play-based interactions in game-based learning settings. We investigate NLU performance on the initial proof-of-concept game datasets versus the real-world deployment datasets and observe the anticipated performance drops in-the-wild. We have shown that, compared to the more straightforward baseline approaches, the Dual Intent and Entity Transformer (DIET) architecture is robust enough to handle real-world data to a large extent for the Intent Recognition task on these domain-specific in-the-wild game datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,828
inproceedings | ward-etal-2022-nlp | How {NLP} Can Strengthen Digital Game Based Language Learning Resources for Less Resourced Languages | Madge, Chris | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.games-1.5/ | Ward, Monica and Xu, Liang and Dhonnchadha, Elaine U{\'i} | Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference | 40--48 | This paper provides an overview of the Cipher engine, which enables the development of a Digital Educational Game (DEG) based on noticing ciphers or patterns in texts. The Cipher engine was used to develop Cipher: Faoi Gheasa, a digital educational game for Irish, which incorporates NLP resources and is informed by Digital Game-Based Language Learning (DGBLL) and Computer-Assisted Language Learning (CALL) research. The paper outlines six phases where NLP has strengthened the Cipher: Faoi Gheasa game. It shows how the Cipher engine can be used to build a Cipher game for other languages, particularly low-resourced and endangered languages in which NLP resources are under-developed or few in number. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,829
inproceedings | chaiko-etal-2022-actors | The {\textquotedblleft}Actors Challenge{\textquotedblright} Project: Collecting Data on Intonation Profiles via a Web Game | Madge, Chris | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.games-1.6/ | Chaiko, Natallia and Sepanta, Sia and Zamparelli, Roberto | Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference | 49--53 | This paper describes {\textquotedblleft}Actors Challenge{\textquotedblright}, a soon-to-go-public web game where the players alternate in the double role of actors and judges of other players' acted-out utterances, and in the process create an oral dataset of prosodic contours that can disambiguate textually identical utterances in different contexts. The game is undergoing alpha testing and should be deployed within a few months. We discuss the need, the core mechanism and the challenges ahead. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,830
inproceedings | newman-liu-2022-generating | Generating Descriptive and Rules-Adhering Spells for Dungeons {\&} Dragons Fifth Edition | Madge, Chris | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.games-1.7/ | Newman, Pax and Liu, Yudong | Proceedings of the 9th Workshop on Games and Natural Language Processing within the 13th Language Resources and Evaluation Conference | 54--60 | We examine the task of generating unique content for the spell system of the tabletop roleplaying game Dungeons and Dragons Fifth Edition using several generative language models. Due to its descriptive nature, Dungeons and Dragons Fifth Edition presents a number of interesting avenues for the generation and analysis of text. In particular, the {\textquotedblleft}spell{\textquotedblright} system of the game has interesting and unique characteristics, as it is primarily made up of high-level, descriptive text but has many of the game's main rules embedded within that text. Thus, we examine the capabilities of several models on the task of generating new content for this game, evaluating the performance through the use of both score-based methods and a survey on the best performing model, to determine how the generated content conforms to the rules of the game and how well it might be used in the game. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,831
inproceedings | ghosh-etal-2022-finrad | {F}in{RAD}: Financial Readability Assessment Dataset - 13,000+ Definitions of Financial Terms for Measuring Readability | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.1/ | Ghosh, Sohom and Sengupta, Shovon and Naskar, Sudip and Singh, Sunny Kumar | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 1--9 | In today's world, the advancement and spread of the Internet and digitalization have resulted in most information being openly accessible. This holds true for financial services as well. Investors make data-driven decisions by analysing publicly available information like annual reports of listed companies, details regarding asset allocation of mutual funds, etc. Many a time these financial documents contain unknown financial terms. In such cases, it becomes important to look at their definitions. However, not all definitions are equally readable. Readability largely depends on the structure, complexity and constituent terms that make up a definition. This brings in the need for automatically evaluating the readability of definitions of financial terms. This paper presents a dataset, FinRAD, consisting of financial terms, their definitions and embeddings. In addition to standard readability scores (like {\textquotedblleft}Flesch Reading Index (FRI){\textquotedblright}, {\textquotedblleft}Automated Readability Index (ARI){\textquotedblright}, {\textquotedblleft}SMOG Index Score (SIS){\textquotedblright}, {\textquotedblleft}Dale-Chall formula (DCF){\textquotedblright}, etc.), it also contains the readability scores (AR) assigned based on the sources from which the terms have been collected. We manually inspect a sample from it to ensure the quality of the assignment. Subsequently, we show that the rule-based standard readability scores do not correlate well with the manually assigned binary readability scores of definitions of financial terms. Finally, we present a few neural baselines using transformer-based architectures to automatically classify these definitions as readable or not. A pre-trained FinBERT model fine-tuned on the FinRAD corpus performs the best (AU-ROC = 0.9927, F1 = 0.9610). This corpus can be downloaded from \url{https://github.com/sohomghosh/FinRAD_Financial_Readability_Assessment_Dataset}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,833
inproceedings | peng-etal-2022-discovering | Discovering Financial Hypernyms by Prompting Masked Language Models | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.2/ | Peng, Bo and Chersoni, Emmanuele and Hsu, Yu-Yin and Huang, Chu-Ren | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 10--16 | With the rising popularity of Transformer-based language models, several studies have tried to exploit their masked language modeling capabilities to automatically extract relational linguistic knowledge, although this kind of research has rarely investigated semantic relations in specialized domains. The present study aims to test a general-domain and a domain-adapted Transformer model on two datasets of financial term-hypernym pairs using the prompt methodology. Our results show that differences in prompts critically impact the models' performance, and that domain adaptation on financial text generally improves the capacity of the models to associate the target terms with the right hypernyms, although the more successful models are those retaining a general-domain vocabulary. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,834
inproceedings | stepisnik-perdih-etal-2022-sentiment | Sentiment Classification by Incorporating Background Knowledge from Financial Ontologies | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.3/ | Stepi{\v{s}}nik-Perdih, Timen and Pelicon, Andra{\v{z}} and {\v{S}}krlj, Bla{\v{z}} and {\v{Z}}nidar{\v{s}}i{\v{c}}, Martin and Lon{\v{c}}arski, Igor and Pollak, Senja | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 17--26 | Ontologies have been increasingly used for machine reasoning over the last few years. They can provide explanations of concepts or be used for concept classification if there exists a mapping from the desired labels to the relevant ontology. This paper presents a practical use of an ontology for the purpose of dataset generalization in an oversampling setting, with the aim of improving classification models. We demonstrate our solution on a novel financial sentiment dataset using the Financial Industry Business Ontology (FIBO). The results show that generalization-based data enrichment benefits simpler models in a general setting and more complex models such as BERT in a low-data setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,835
inproceedings | tsutsumi-utsuro-2022-detecting | Detecting Causes of Stock Price Rise and Decline by Machine Reading Comprehension with {BERT} | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.4/ | Tsutsumi, Gakuto and Utsuro, Takehito | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 27--35 | In this paper, we focus on news reported when stock prices fluctuate significantly. Such news is a very useful source of information on what factors cause stock prices to change. However, because it is manually produced, not all events that cause stock prices to change are necessarily reported. Thus, in order to provide investors with information on the causes of stock price changes, it is necessary to develop a system that collects, from the Internet, information on events that could be closely related to the stock price changes of certain companies. As the first step towards developing such a system, this paper takes the approach of employing a BERT-based machine reading comprehension model, which extracts causes of stock price rises and declines from news reports on stock price changes. In the evaluation, the approach of using the title of the article as the question for machine reading comprehension performs well. It is shown that the fine-tuned machine reading comprehension model successfully detects additional causes of stock price rises and declines beyond those stated in the title of the article. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,836
inproceedings | mohamad-zamani-etal-2022-xlnet | {XLNET}-{GRU} Sentiment Regression Model for Cryptocurrency News in {E}nglish and {M}alay | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.5/ | Mohamad Zamani, Nur Azmina and Liew, Jasy Suet Yan and Yusof, Ahmad Muhyiddin | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 36--42 | Contextual word embeddings such as those from transformer language models are gaining popularity in text classification and analytics but have rarely been explored for sentiment analysis on cryptocurrency news, particularly in languages other than English. Various state-of-the-art (SOTA) pre-trained language models have been introduced recently for text representation, such as BERT, ALBERT, ELECTRA, RoBERTa, and XLNet. Hence, this study aims to investigate the performance of using a Gated Recurrent Unit (GRU) with XLNet (Generalized Autoregressive Pretraining for Language Understanding) contextual word embeddings for sentiment analysis on English and Malay cryptocurrency news (Bitcoin and Ethereum). We also compare the performance of our XLNet-GRU model against other SOTA pre-trained language models. Manually labelled corpora of English and Malay news are utilized to learn the context of text specifically in the cryptocurrency domain. Based on our experiments, we found that our XLNet-GRU sentiment regression model outperformed the lexicon-based baseline, with mean adjusted R2 = 0.631 across Bitcoin and Ethereum for English and mean adjusted R2 = 0.514 for Malay. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,837
inproceedings | el-haj-etal-2022-financial | The Financial Narrative Summarisation Shared Task ({FNS} 2022) | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.6/ | El-Haj, Mahmoud and Zmandar, Nadhem and Rayson, Paul and AbuRa{'}ed, Ahmed and Litvak, Marina and Pittaras, Nikiforos and Giannakopoulos, George and Kosmopoulos, Aris and Carbajo-Coronado, Blanca and Moreno-Sandoval, Antonio | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 43--52 | This paper presents the results and findings of the Financial Narrative Summarisation Shared Task on summarising UK, Greek and Spanish annual reports. The shared task was organised as part of the Financial Narrative Processing 2022 Workshop (FNP 2022 Workshop). The Financial Narrative Summarisation Shared Task (FNS-2022) has been running since 2020 as part of the Financial Narrative Processing (FNP) workshop series (El-Haj et al., 2022; El-Haj et al., 2021; El-Haj et al., 2020b; El-Haj et al., 2019c; El-Haj et al., 2018). The shared task included one main task: the use of either abstractive or extractive automatic summarisers to summarise long documents, namely UK, Greek and Spanish financial annual reports. This shared task is the third to target financial documents. The data for the shared task was created and collected from publicly available annual reports published by firms listed on the Stock Exchanges of the UK, Greece and Spain. A total of 14 systems from 7 different teams participated in the shared task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,838
inproceedings | foroutan-etal-2022-multilingual | Multilingual Text Summarization on Financial Documents | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.7/ | Foroutan, Negar and Romanou, Angelika and Massonnet, St{\'e}phane and Lebret, R{\'e}mi and Aberer, Karl | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 53--58 | This paper proposes a multilingual Automated Text Summarization (ATS) method targeting the Financial Narrative Summarization Task (FNS-2022). We developed two systems: the first uses a pre-trained abstractive summarization model fine-tuned on the downstream objective; the second takes an extractive approach in which a similarity search is performed on the trained span representations. Both models aim to identify the beginning of the continuous narrative section of the document. The language models were fine-tuned on a financial document collection in three languages (English, Spanish, and Greek). The proposed systems achieve high performance in the given task, with the sequence-to-sequence variant ranked 1st on ROUGE-2 F1 score on the test set for each of the three languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,839
inproceedings | vaca-etal-2022-extractive | Extractive and Abstractive Summarization Methods for Financial Narrative Summarization in {E}nglish, {S}panish and {G}reek | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.8/ | Vaca, Alejandro and Segurado, Alba and Betancur, David and Barbero Jim{\'e}nez, {\'A}lvaro | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 59--64 | This paper describes the three summarization systems submitted to the Financial Narrative Summarization Shared Task (FNS-2022). We developed a task-specific extractive summarization method for the reports in English, based on a sequence classification task whose objective was to find the sentence where the summary begins. On the other hand, since the summaries for the reports in Spanish and Greek were not extractive, we used an abstractive strategy for each of these languages. In particular, we created a new Encoder-Decoder architecture for Spanish, MariMari, based on an existing encoder-only model; we also trained multilingual Encoder-Decoder models for this task. Finally, the summaries for the reports in Greek were obtained with a translation-summary-translation system, in which the reports were translated to English and summarised, and then the summaries were translated back to Greek. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,840
inproceedings | shukla-etal-2022-dimsum | {D}i{MS}um: Distributed and Multilingual Summarization of Financial Narratives | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.9/ | Shukla, Neelesh and Vaid, Amit and Katikeri, Raghu and Keeriyadath, Sangeeth and Raja, Msp | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 65--72 | This paper describes our submission to the Financial Narrative Summarization (FNS) task of the FNP-2022 workshop. The objective of the task was to generate summaries of not more than 1000 words for annual financial reports written in English, Spanish and Greek. The central idea of this paper is to demonstrate automatic ways of identifying key narrative sections and their contributions towards generating summaries of financial reports. We observed a few limitations in previous works: first, the complete report was considered for summary generation instead of key narrative sections; second, many of the works followed manual or heuristic-based techniques to identify narrative sections; third, sentences from key narrative sections were abruptly dropped to limit the summary to the desired length. To overcome these shortcomings, we introduce a novel approach to automatically learn key narrative sections and their weighted contributions to the reports. Since the summaries may come from various parts of the reports, the summary generation process is distributed amongst the key narrative sections based on the identified weights, and the partial summaries are later combined into an overall summary. We also show that our approach is adaptive to various report formats and languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,841
inproceedings | khanna-etal-2022-transformer | Transformer-based Models for Long Document Summarisation in Financial Domain | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.10/ | Khanna, Urvashi and Ghodratnama, Samira and Moll{\'a}, Diego and Beheshti, Amin | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 73--78 | Summarisation of long financial documents is a challenging task due to the lack of large-scale datasets and the need for domain knowledge experts to create human-written summaries. Traditional summarisation approaches that generate a summary based on the content cannot produce summaries comparable to human-written ones and thus are rarely used in practice. In this work, we use the Longformer-Encoder-Decoder (LED) model to handle long financial reports. We describe our experiments and participating systems in the financial narrative summarisation shared task. Multi-stage fine-tuning helps the model generalise better on niche domains and avoids the problem of catastrophic forgetting. We further investigate the effect of the staged fine-tuning approach on the FNS dataset. Our systems achieved promising results in terms of ROUGE scores on the validation dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,842
inproceedings | el-haj-ogden-2022-financial | Financial Narrative Summarisation Using a Hybrid {TF}-{IDF} and Clustering Summariser: {AO}-Lancs System at {FNS} 2022 | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.11/ | El-Haj, Mahmoud and Ogden, Andrew | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 79--82 | This paper describes the HTAC system submitted to the Financial Narrative Summarization Shared Task (FNS-2022): a methodology implementing Financial Narrative Processing (FNP) to summarise financial annual reports, named Hybrid TF-IDF and Clustering (HTAC). This involves a hybrid approach combining TF-IDF sentence ranking as an NLP tool with a state-of-the-art clustering machine learning model to produce short 1000-word summaries of long financial annual reports. These annual reports are a legal responsibility of public companies and are in excess of 50,000 words. The model extracts the crucial information from these documents, discarding the extraneous content and leaving only the crucial information in a shorter, non-redundant summary, producing summaries that are more effective than those produced by two pre-existing generic summarisers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,843
inproceedings | kang-etal-2022-financial | The Financial Document Structure Extraction Shared Task ({F}in{TOC} 2022) | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.12/ | Kang, Juyeon and Ait Azzi, Abderrahim and Bellato, Sandra and Carbajo Coronado, Blanca and El-Haj, Mahmoud and El Maarouf, Ismail and Gan, Mei and Gisbert, Ana and Moreno Sandoval, Antonio | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 83--88 | This paper describes the FinTOC-2022 Shared Task on structure extraction from financial documents, the participants' results and their findings. This shared task was organized as part of the 4th Financial Narrative Processing Workshop (FNP 2022), held jointly at the 13th Edition of the Language Resources and Evaluation Conference (LREC 2022), Marseille, France (El-Haj et al., 2022). This shared task aimed to stimulate research in systems for extracting tables-of-contents (TOCs) from investment documents (such as financial prospectuses) by detecting the document titles and organizing them hierarchically into a TOC. For the fourth edition of this shared task, three subtasks were presented to the participants: one with English documents, one with French documents and one with Spanish documents. This year, we proposed a different and revised dataset for English and French compared to the previous editions of FinTOC, and a new dataset of Spanish documents was added. The task attracted 6 submissions for each language from 4 teams, and the most successful methods make use of textual, structural and visual features extracted from the documents and propose classification models for detecting titles and TOCs for all of the subtasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,844
inproceedings | bogatenkova-etal-2022-ispras | {ISPRAS}@{F}in{TOC}-2022 Shared Task: Two-stage {TOC} Generation Model | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.13/ | Bogatenkova, Anastasiia and Belyaeva, Oksana Vladimirovna and Perminov, Andrew Igorevich and Kozlov, Ilya Sergeevich | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 89--94 | This work describes our participation in the FinTOC-2022 Shared Task: {\textquotedblleft}Financial Document Structure Extraction{\textquotedblright}. The competition contains two subtasks: title detection and TOC generation. We describe an approach to solving these tasks and propose a pipeline consisting of extraction of document lines and of any existing TOC, feature matrix construction, and classification. The classification model consists of two classifiers: the first, a binary classifier, separates title lines from non-title lines; the second determines the title level. In the title detection task, we obtained F1 scores of 0.900, 0.778 and 0.558; in the TOC generation task, we obtained 63.1, 41.5 and 40.79 for the harmonic mean of the Inex F1 score and Inex level accuracy, for English, French and Spanish documents respectively. With these results, our approach took first place among English and French submissions and second place among Spanish submissions. As a team, we took first place in the competition in the English and French categories and second place in the Spanish category. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,845
inproceedings | cassotti-etal-2022-swapuniba | swap{UNIBA}@{F}in{TOC}2022: Fine-tuning Pre-trained Document Image Analysis Model for Title Detection on the Financial Domain | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.14/ | Cassotti, Pierluigi and Musto, Cataldo and DeGemmis, Marco and Lekkas, Georgios and Semeraro, Giovanni | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 95--99 | In this paper, we introduce the results of our system submitted to the FinTOC 2022 task. We address the task using a two-stage process: first, we detect titles using Document Image Analysis; then, we train a supervised model for hierarchical level prediction. We perform Document Image Analysis using a Faster R-CNN pre-trained on the PubLayNet dataset. We fine-tuned the model on the FinTOC 2022 training set. We extract orthographic and layout features from detected titles and use them to train a Random Forest model to predict the title level. The proposed system ranked {\#}1 on both the Title Detection and Table of Contents extraction tasks for Spanish. The system ranked {\#}3 on both subtasks for English and French. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,846
inproceedings | giguet-lucas-2022-greyc | {GREYC}@{F}in{TOC}-2022: Handling Document Layout and Structure in Native {PDF} Bundle of Documents | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.15/ | Giguet, Emmanuel and Lucas, Nadine | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 100--104 | In this paper, we present our contribution to the FinTOC-2022 Shared Task {\textquotedblleft}Financial Document Structure Extraction{\textquotedblright}. We participated in the three tracks dedicated to English, French and Spanish document processing. Our main contribution consists in considering a financial prospectus as a bundle of documents, i.e., a set of merged documents, each with its own layout and structure. Therefore, Document Layout and Structure Analysis (DLSA) first starts with the boundary detection of each document using general layout features. Then, the process applies inside each single document, taking advantage of its local properties. DLSA is achieved by simultaneously considering text content, vectorial shapes and images embedded in the native PDF document. For the Title Detection task in English and French, we observed a significant improvement in the F-measures compared with those obtained during our previous participation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,847
inproceedings | mariko-etal-2022-financial | The Financial Causality Extraction Shared Task ({F}in{C}ausal 2022) | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.16/ | Mariko, Dominique and Abi-Akl, Hanna and Trottier, Kim and El-Haj, Mahmoud | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 105--107 | We present the FinCausal 2022 Shared Task on Causality Detection in Financial Documents and the associated FinCausal dataset, and discuss the participating systems and results. The task focuses on detecting whether an object, an event or a chain of events is considered a cause of a prior event. In particular, this shared task focuses on determining causality associated with a quantified fact. An event is defined as the arising or emergence of a new object or context in regard to a previous situation. Therefore, the task emphasises the detection of causality associated with the transformation of financial objects embedded in quantified facts. A total of 7 teams submitted system runs to the FinCausal task and contributed a system description paper. The FinCausal shared task is associated with the 4th Financial Narrative Processing Workshop (FNP 2022) (El-Haj et al., 2022), held at the 13th Language Resources and Evaluation Conference (LREC 2022) in Marseille, France, on June 24, 2022. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,848
inproceedings | saha-etal-2022-spock | {SPOCK} at {F}in{C}ausal 2022: Causal Information Extraction Using Span-Based and Sequence Tagging Models | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.17/ | Saha, Anik and Ni, Jian and Hassanzadeh, Oktie and Gittens, Alex and Srinivas, Kavitha and Yener, Bulent | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 108--111 | Causal information extraction is an important task in natural language processing, particularly in the finance domain. In this work, we develop several information extraction models using pre-trained transformer-based language models for identifying cause and effect text spans in financial documents. We use the FinCausal 2021 and 2022 datasets to train span-based and sequence tagging models. Our ensemble of sequence tagging models based on the RoBERTa-Large pre-trained language model achieves an F1 score of 94.70 with an Exact Match score of 85.85, obtaining 1st place in the FinCausal 2022 competition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,849
inproceedings | pant-chopra-2022-multilingual | Multilingual Financial Documentation Summarization by {T}eam{\_}{T}redence for {FNS}2022 | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.18/ | Pant, Manish and Chopra, Ankush | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 112--115 | This paper describes the multilingual long document summarization systems submitted to the Financial Narrative Summarization Shared Task (FNS 2022) by Team-Tredence. We developed task-specific summarization methods for 3 languages {--} English, Spanish and Greek. The solution is divided into two parts: a RoBERTa model was fine-tuned to identify/extract summarizing segments from English documents, and T5-based models were used for summarizing Spanish and Greek documents. A purely extractive approach was applied to summarize English documents using data-specific heuristics. An mT5 model was fine-tuned to identify potential narrative sections for Greek and Spanish, followed by fine-tuning mT5 and T5 (Spanish version) for the abstractive summarization task. This system also features a novel approach for generating a summarization training dataset using long document segmentation and the semantic similarity across segments. We also introduce an N-gram variability score to select sub-segments for generating more diverse and informative summaries from long documents. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,850
inproceedings | lyu-etal-2022-dcu | {DCU}-Lorcan at {F}in{C}ausal 2022: Span-based Causality Extraction from Financial Documents using Pre-trained Language Models | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.19/ | Lyu, Chenyang and Ji, Tianbo and Sun, Quanwei and Zhou, Liting | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 116--120 | In this paper, we describe our DCU-Lorcan system for the FinCausal 2022 shared task: span-based cause and effect extraction from financial documents. We frame the FinCausal 2022 causality extraction task as a span extraction/sequence labeling task; our submitted systems are based on the contextualized word representations produced by pre-trained language models and linear layers predicting the label for each word, followed by post-processing heuristics. In our experiments, we employ pre-trained language models including DistilBERT, BERT and SpanBERT. Our best performing system achieves F1, Recall, Precision and Exact Match scores of 92.76, 92.77, 92.76 and 68.60 respectively. Additionally, we conduct experiments investigating the effect of data size on the performance of the causality extraction model, and an error analysis investigating the predicted outputs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,851
inproceedings | ghosh-naskar-2022-lipi | {LIPI} at {F}in{C}ausal 2022: Mining Causes and Effects from Financial Texts | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.20/ | Ghosh, Sohom and Naskar, Sudip | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 121--123 | While reading financial documents, investors need to know the causes and their effects. This empowers them to make data-driven decisions. Thus, there is a need to develop an automated system for extracting causes and their effects from financial texts using Natural Language Processing. In this paper, we present the approach our team LIPI followed while participating in the FinCausal 2022 shared task. This approach is based on the winning solution of the first edition of FinCausal held in the year 2020. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,852 |
inproceedings | xu-etal-2022-ilab | i{L}ab at {F}in{C}ausal 2022: Enhancing Causality Detection with an External Cause-Effect Knowledge Graph | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.21/ | Xu, Ziwei and Nararatwong, Rungsiman and Kertkeidkachorn, Natthawut and Ichise, Ryutaro | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 124--127 | The application of span detection is growing fast along with the increasing need to understand the causes and effects of events, especially in the finance domain. However, once syntactic clues are absent from the text, models tend to reverse the cause and effect spans. To solve this problem, we introduce graph construction techniques to inject cause-effect graph knowledge for graph embedding. The graph features, combined with BERT embeddings, are then used to predict the cause and effect spans. The results show that our proposed graph builder method outperforms the other methods with and without external knowledge. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,853
inproceedings | mondal-etal-2022-expertneurons | {E}xpert{N}eurons at {F}in{C}ausal 2022 Task 2: Causality Extraction for Financial Documents | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.22/ | Mondal, Joydeb and Bhat, Nagaraj and Sarkar, Pramir and Reza, Shahid | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 128--130 | This paper describes the approach we built for causality extraction from financial documents, submitted for FinCausal 2022 task 2. We provide a solution with intelligent pre-processing and post-processing to detect the number of causes and effects in a financial document and extract them. Our approach achieved a weighted-average F1 score of 90{\%} on the official blind evaluation dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,854
inproceedings | naskar-etal-2022-atl | {ATL} at {F}in{C}ausal 2022: Transformer Based Architecture for Automatic Causal Sentence Detection and Cause-Effect Extraction | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.23/ | Naskar, Abir and Dasgupta, Tirthankar and Jana, Sudeshna and Dey, Lipika | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 131--134 | Automatic extraction of cause-effect relationships from natural language texts is a challenging open problem in Artificial Intelligence. Most of the early attempts at its solution used manually constructed linguistic and syntactic rules on restricted domain datasets. With the advent of big data and the recent popularization of deep learning, the paradigm for tackling this problem has slowly shifted. In this work we propose a transformer-based architecture to automatically detect causal sentences from textual mentions and then identify the corresponding cause-effect relations. We describe our submission to the FinCausal 2022 shared task based on this method. Our model achieves an F1-score of 0.99 for Task 1 and an F1-score of 0.60 for Task 2 on the shared task dataset of financial documents. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,855
inproceedings | lee-etal-2022-mnlp | {MNLP} at {F}in{C}ausal2022: Nested {NER} with a Generative Model | El-Haj, Mahmoud and Rayson, Paul and Zmandar, Nadhem | jun | 2022 | Marseille, France | European Language Resources Association | https://aclanthology.org/2022.fnp-1.24/ | Lee, Jooyeon and Pham, Luan Huy and Uzuner, {\"O}zlem | Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022 | 135--138 | This paper describes work performed for the FinCausal 2022 Shared Task {\textquotedblleft}Financial Document Causality Detection{\textquotedblright} (FinCausal 2022). As the name implies, the task involves the extraction of causal and consequential elements from financial text. Our approach focuses on employing Nested NER using the Text-to-Text Transfer Transformer (T5) generative models while applying different combinations of datasets and tagging methods. Our system reports an accuracy of 79{\%} in Exact Match comparison and an F-measure of 92{\%} at the token level. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,856
inproceedings | wiriyathammabhum-2022-tedb | {TEDB} System Description to a Shared Task on Euphemism Detection 2022 | Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.flp-1.1/ | Wiriyathammabhum, Peratham | Proceedings of the 3rd Workshop on Figurative Language Processing (FLP) | 1--7 | In this report, we describe our Transformers for euphemism detection baseline (TEDB) submissions to the shared task on euphemism detection 2022. We cast the task of predicting euphemism as text classification. We considered Transformer-based models, which are the current state-of-the-art methods for text classification. We explored different training schemes, pretrained models, and model architectures. Our best result of 0.816 F1-score (0.818 precision and 0.814 recall) consists of a euphemism-detection-finetuned TweetEval/TimeLMs-pretrained RoBERTa model as a feature-extractor frontend with a KimCNN classifier backend trained end-to-end using a cosine annealing scheduler. We observed that pretraining on sentiment analysis and offensiveness detection correlates with higher F1-scores, while pretraining on other tasks, such as sarcasm detection, produces lower F1-scores. Also, adding more word vector channels does not improve the performance in our experiments. | null | null | 10.18653/v1/2022.flp-1.1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,858
inproceedings | maimaitituoheti-etal-2022-prompt | A Prompt Based Approach for Euphemism Detection | Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.flp-1.2/ | Maimaitituoheti, Abulimiti and Yong, Yang and Xiaochao, Fan | Proceedings of the 3rd Workshop on Figurative Language Processing (FLP) | 8--12 | Euphemism is an indirect way to express sensitive topics. People can comfortably communicate with each other about sensitive topics or taboos by using euphemisms. The Euphemism Detection Shared Task at the Third Workshop on Figurative Language Processing, co-located with EMNLP 2022, provided a euphemism detection dataset divided into a train set and a test set. We conducted euphemism detection experiments by prompt-tuning pre-trained language models on the dataset. We used RoBERTa as the pre-trained language model and created suitable templates and verbalizers for the euphemism detection task. Our approach achieved the third-best score in the euphemism detection shared task. This paper describes our model participating in the task. | null | null | 10.18653/v1/2022.flp-1.2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,859
inproceedings | berger-2022-transfer | Transfer Learning Parallel Metaphor using Bilingual Embeddings | Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.flp-1.3/ | Berger, Maria | Proceedings of the 3rd Workshop on Figurative Language Processing (FLP) | 13--23 | Automated metaphor detection in languages other than English is highly restricted, as training corpora are comparatively rare. One way to overcome this problem is transfer learning. This paper gives an overview of transfer learning techniques applied to NLP. We first introduce types of transfer learning; then we present work focusing on: i) transfer learning with cross-lingual embeddings; ii) transfer learning in machine translation; and iii) transfer learning using pre-trained transformer models. The paper is complemented by first experiments that make use of bilingual embeddings generated from different sources of parallel data: we i) present the preparation of a parallel gold corpus; ii) examine the embedding spaces to search for metaphoric words cross-lingually; and iii) run first experiments in transfer learning German metaphor from English labeled data only. Results show that finding data sources for bilingual embedding training and the vocabulary covered by these embeddings are critical for learning metaphor cross-lingually. | null | null | 10.18653/v1/2022.flp-1.3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,860
inproceedings | alnajjar-etal-2022-ring | Ring That Bell: A Corpus and Method for Multimodal Metaphor Detection in Videos | Ghosh, Debanjan and Beigman Klebanov, Beata and Muresan, Smaranda and Feldman, Anna and Poria, Soujanya and Chakrabarty, Tuhin | dec | 2022 | Abu Dhabi, United Arab Emirates (Hybrid) | Association for Computational Linguistics | https://aclanthology.org/2022.flp-1.4/ | Alnajjar, Khalid and H{\"a}m{\"a}l{\"a}inen, Mika and Zhang, Shuo | Proceedings of the 3rd Workshop on Figurative Language Processing (FLP) | 24--33 | We present the first openly available multimodal metaphor annotated corpus. The corpus consists of videos including audio and subtitles that have been annotated by experts. Furthermore, we present a method for detecting metaphors in the new dataset based on the textual content of the videos. The method achieves a high F1-score (62{\%}) for metaphorical labels. We also experiment with other modalities and multimodal methods; however, these methods did not outperform the text-based model. In our error analysis, we identify cases where video could help in disambiguating metaphors; however, the visual cues are too subtle for our model to capture. The data is available on Zenodo. | null | null | 10.18653/v1/2022.flp-1.4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 25,861