# Investigating Glyph-Phonetic Information for Chinese Spell Checking: What Works and What's Next?
Xiaotian Zhang∗, Yanjun Zheng∗, Hang Yan, Xipeng Qiu†
Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University
{xiaotianzhang21, yanjunzheng21}@m.fudan.edu.cn {hyan19, xpqiu}@fudan.edu.cn
## Abstract
While pre-trained Chinese language models have demonstrated impressive performance on a wide range of NLP tasks, the Chinese Spell Checking (CSC) task remains a challenge. Previous research has explored using information such as glyphs and pronunciations to improve the ability of CSC models to distinguish misspelled characters, with good results at the accuracy level on public datasets. However, the generalization ability of these CSC models has not been well understood: it is unclear whether they incorporate glyph-phonetic information and, if so, whether this information is fully utilized. In this paper, we aim to better understand the role of glyph-phonetic information in the CSC task and suggest directions for improvement. Additionally, we propose a new, more challenging, and practical setting for testing the generalizability of CSC models. Our code will be released at https://github.com/piglaker/ConfusionCluster.
## 1 Introduction
Spell checking (SC) is the process of detecting and correcting spelling errors in natural human texts.
For some languages, such as English, SC is relatively straightforward, thanks to the use of tools like the Levenshtein distance and a well-defined vocabulary. However, for Chinese, Chinese spell checking (CSC) is a more challenging task, due to the nature of the Chinese language. Chinese has a large vocabulary consisting of at least 3,500 common characters, which creates a vast search space and an unbalanced distribution of errors (Ji et al.,
2021). Moreover, substitutions or combinations of characters can significantly alter the meaning of a Chinese sentence while still being grammatically correct. The CSC task, therefore, requires the output to retain as much of the original meaning and wording as possible. Figure 1 shows different types of errors and the corresponding target characters.

∗These two authors contributed equally.
†Corresponding author.

Figure 1: An example of different errors affecting CSC results. Red, green, and blue mark the misspelled character, the expected correction, and an unexpected correction, respectively.
Previous work has attempted to incorporate inductive bias to model the relationship between Chinese character glyphs, pronunciation, and semantics (Xu et al., 2021).
In recent years, pre-trained language models
(PLMs) have shown great success in a wide range of NLP tasks. With the publication of BERT (Devlin et al., 2018), using PLMs for CSC tasks has become a mainstream approach, with examples including FASpell (Hong et al., 2019), Soft-Masked BERT (Zhang et al., 2020), SpellGCN (Cheng et al., 2020), and PLOME (Liu et al., 2021). Some researchers have focused on the special features of Chinese characters in terms of glyphs and pronunciations, aiming to improve the ability to distinguish misspelled characters by incorporating glyph-phonetic information (Ji et al., 2021; Liu et al., 2021; Xu et al., 2021). However, despite these advances, the generalization of CSC models to real-world applications remains limited. How can we improve the generalization ability of CSC models? Can current models recognize and utilize glyph-phonetic information to make predictions? As we re-examine previous work, we have identified some previously unexplored issues and potential future directions for research.
Q1: *Do existing Chinese pre-trained models encode the glyph-phonetic information of Chinese characters?* Chinese writing is morpho-semantic, and its characters contain additional semantic information. Before studying existing CSC models, we seek to investigate whether existing mainstream Chinese pre-trained language models are capable of capturing glyph-phonetic information.
Q2: **Do existing CSC models fully utilize the glyph-phonetic information of misspelled characters to make predictions?** Intuitively, introducing glyph-phonetic information in the CSC task can help identify misspelled characters and improve the performance of the model. However, there has been little research on whether existing CSC models effectively use glyph-phonetic information in this way.
Empirically, our main observations are summarized as follows:
- We show that Chinese PLMs like BERT encode glyph-phonetic information without explicit introduction during pre-training, which can provide insight into the design of future Chinese pre-trained models. We also propose a simple probe task for measuring how much glyph-phonetic information is contained in a Chinese pre-trained model.
- We analyze the ability of CSC models to exploit misspelled characters and explain why current CSC methods perform well on test sets but poorly in practice. We propose a new probe experiment and a new metric Correction with Misspelled Character Coverage Ratio (CCCR).
- We propose a new setting for the CSC task, called isolation correction, to better test the generalizability and correction performance of CSC models. This setting alleviates the shortcuts present in the original dataset, making the CSC task more challenging and realistic.
We hope that this detailed empirical study will provide follow-up researchers with more guidance on how to better incorporate glyph-phonetic information in CSC tasks and pave the way for new state-of-the-art results in this area.
## 2 Related Work

## 2.1 Glyph Information
Learning glyph information from Chinese character forms has gained popularity with the rise of deep neural networks. After word embeddings (Mikolov et al., 2013b) were proposed, early studies (Sun et al., 2014; Shi et al., 2015; Yin et al., 2016) used radical embeddings to capture semantics, modeling graphic information by splitting characters into radicals. Another approach to modeling glyph information is to treat characters as images, using convolutional neural networks (CNNs) as glyph feature extractors (Liu et al., 2010; Shao et al.,
2017; Dai and Cai, 2017; Meng et al., 2019). With pre-trained language models, glyph and phonetic information can be introduced end-to-end. ChineseBERT (Sun et al., 2021) is a pre-trained Chinese NLP model that flattens the image vector of input characters to obtain the glyph embedding and achieves significant performance gains across a wide range of Chinese NLP tasks.
## 2.2 Phonetic Information
Previous research has explored using phonetic information to improve natural language processing
(NLP) tasks. Liu et al. (2019) propose using both textual and phonetic information in neural machine translation (NMT) by combining them in the input embedding layer, making NMT models more robust to homophone errors. There is also work on incorporating phonetic embeddings through pre-training. Zhang et al. (2021) propose a novel end-to-end framework for CSC with phonetic pre-training, which improves the model's ability to understand sentences with misspellings and to model the similarity between characters and pinyin tokens. Sun et al. (2021) apply a CNN and a max-pooling layer to the pinyin sequence to derive the pinyin embedding.
## 2.3 Chinese Spell Checking

## 2.3.1 Task Description
Under the language model framework, Chinese Spell Checking is often modeled as a conditional token prediction problem. Formally, let $X = c_1, c_2, \ldots, c_T$ be an input sequence with potentially misspelled characters $c_i$. The goal of this task is to discover and correct these errors by estimating the conditional probability $P(y_i \mid X)$ for each misspelled character $c_i$.
## 2.3.2 CSC Datasets
We conduct experiments on the benchmark SIGHAN dataset (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015), which was built from foreigners' writings and contains 3,162 texts and 461 types of errors. However, previous studies have reported poor annotation quality in SIGHAN13 and SIGHAN14 (Wu et al., 2013; Yu et al., 2014),
with many errors, such as the mixed usage of auxiliary characters, remaining unannotated (Cheng et al., 2020). To address these issues and enable fair comparisons of different models, we apply our probe experiment to the entire SIGHAN dataset and use only clean SIGHAN15 for metrics in our review. The statistics of the dataset are detailed in the appendix.
## 2.3.3 Methods For CSC
To investigate the role of glyph-phonetic information in CSC, we conduct a probe experiment using different Chinese PLMs as the initial parameters of the baseline. The models we use are detailed in the appendix. For our first probe experiment, we use the out-of-the-box BERT model as a baseline. We input the corrupted sentence into BERT and get the prediction for each token. If the predicted token for the corresponding output position is different from its input token, we consider BERT to have detected and corrected the error (Zhang et al., 2022).
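As a rough illustration of this procedure (not the authors' released code), the sketch below assumes a HuggingFace masked-language-model checkpoint; the checkpoint name and the example sentence are illustrative choices. A position counts as a detected error whenever the model's argmax token differs from the input character.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

def probe_sentence(sentence):
    """Feed the (possibly corrupted) sentence to BERT without masking and collect
    every position where the predicted token differs from the input token."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits              # (1, seq_len, vocab_size)
    pred_ids = logits.argmax(dim=-1)[0]
    input_ids = enc["input_ids"][0]
    corrections = []
    for pos in range(1, len(input_ids) - 1):      # skip [CLS] and [SEP]
        if pred_ids[pos] != input_ids[pos]:
            corrections.append((pos,
                                tokenizer.decode([input_ids[pos]]),
                                tokenizer.decode([pred_ids[pos]])))
    return corrections                            # (position, input char, predicted char)

print(probe_sentence("我今天很高心"))               # a toy misspelled input
```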
We also consider two previous pre-trained methods that introduced glyph and phonetic information for CSC. PLOME (Liu et al., 2021) is a pre-trained masked language model that jointly learns how to understand language and correct spelling errors. It masks chosen tokens with similar characters according to a confusion set and introduces phonetic prediction to learn misspelled knowledge at the phonetic level using GRU networks. ReaLiSe (Xu et al., 2021) leverages the multimodal information of Chinese characters by using a universal encoder for vision and a sequence modeler for pronunciations and semantics.
## 2.4 Metrics
For convenience, all Chinese Spell Checking metrics in this paper are based on the sentence-level score (Cheng et al., 2020). We mix the original SIGHAN training set with the enhanced training set of 270k examples generated by OCR- and ASR-based approaches (Wang et al., 2018), which has been widely used in the CSC task.
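For concreteness, a minimal sketch of a sentence-level correction metric is given below. It assumes a common convention (the exact definition of Cheng et al. (2020) may differ in details): precision is computed over sentences the model modified, recall over sentences that contain errors, and a prediction counts as correct only if it exactly matches the reference sentence.

```python
def sentence_level_prf(sources, predictions, references):
    """Sentence-level correction precision/recall/F1 (assumed convention, see above)."""
    changed = [p != s for s, p in zip(sources, predictions)]
    with_error = [s != r for s, r in zip(sources, references)]
    correct = [c and p == r for c, p, r in zip(changed, predictions, references)]
    tp = sum(correct)
    precision = tp / sum(changed) if sum(changed) else 0.0
    recall = tp / sum(with_error) if sum(with_error) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```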
## 3 Experiment-I: Probing For Character Glyph-Phonetic Information
In this section, we conduct a simple MLP-based probe to explore the presence of glyph and phonetic information in Chinese PLMs and to quantify the extent to which tokens capture glyph-phonetic information. We consider glyph and phonetic information separately in this experiment.
## 3.1 Glyph Probe
For glyphs, we train a binary classifier probe to predict whether one character is contained within another character. We use the frozen embeddings of these characters from Chinese PLMs as input. That is, as shown in the upper part of Figure 2, if the probe is successful, it will predict that "称" contains "尔" at the glyph level but not "产" (it is difficult to define whether two characters are visually similar, so we use this containment criterion as a shortcut).

![2_image_0.png](2_image_0.png)
For the glyph probe experiment, we consider the static, non-contextualized embeddings of the following Chinese PLMs: BERT (Cui et al.,
2019), RoBERTa (Cui et al., 2019), ChineseBERT (Sun et al., 2021), MacBERT (Cui et al.,
2020), CPT (Shao et al., 2021), GPT-2 (Radford et al., 2019), BART (Shao et al., 2021),
and T5 (Raffel et al., 2020). We also use Word2vec (Mikolov et al., 2013a) as a baseline and a completely randomly initialized embedding as a control. See Appendix C.1 for details on the models used in this experiment.
The vocabulary of different Chinese PLMs is similar. For convenience, we only consider the characters that appear in the vocabulary of BERT,
and we also remove the characters that are rare and too complex in structure. The details of our datasets for the probe are shown in Appendix C.2.
We divide each character $w$ into its components $\{u_1, u_2, \ldots, u_i\}$ using a character splitting tool1. That is, "称" will be divided into "禾" and "尔". The set of all characters (e.g., "称") is $W = \{w_1, w_2, \ldots, w_d\}$, where $d$ is the number of characters. The set of all character components (e.g., "禾", "尔") is $U = \{u_1, u_2, \ldots, u_c\}$, where $c$ is the number of components. If $u_i$ exists in $w_i$, in other words, if it is a component of $w_i$ at the glyph level, then $(u_i, w_i)$ is a positive example; otherwise it is a negative example. We then construct a positive dataset $D_{pos} = \{\{u_1, w_1\}, \{u_2, w_1\}, \ldots, \{u_i, w_d\}\}$, where each $u$ corresponds to its $w$. We also construct a balanced negative dataset $D_{neg} = \{\{u^n_1, w_1\}, \{u^n_2, w_1\}, \ldots, \{u^n_i, w_d\}\}$, whose size equals that of $D_{pos}$ and where each $u^n$ is randomly selected from the set $U$. We mix $D_{pos}$ and $D_{neg}$ and split the dataset into training and test sets with a ratio of 80:20, ensuring that a character only appears on one side.
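A minimal sketch of this pair construction is shown below. The character-splitting function is passed in as an argument because the exact API of the splitting tool is not described here; only the positive/negative pairing and the 1:1 balancing follow the description above (the 80:20 split is omitted).

```python
import random

def build_glyph_pairs(characters, split_fn, seed=0):
    """characters: iterable of Chinese characters; split_fn: char -> list of components."""
    random.seed(seed)
    char_to_components = {w: list(split_fn(w) or []) for w in characters}
    all_components = sorted({u for comps in char_to_components.values() for u in comps})

    pairs = []
    for w, comps in char_to_components.items():
        for u in comps:
            pairs.append((u, w, 1))                              # u is a component of w
            candidates = [c for c in all_components if c not in comps]
            if candidates:
                pairs.append((random.choice(candidates), w, 0))  # balanced negative
    return pairs
```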
We train the probe on these PLMs' static, non-trainable embeddings. For every pair $(u_i, w_i)$, we take the embeddings of $u_i$ and $w_i$ and concatenate them as the input $x_i$. The probe trains an MLP to predict the logit $\hat{y}_i$, which is defined as:

$$\hat{y}_i = \mathrm{sigmoid}(\mathrm{MLP}(x_i))$$
To control the variables as much as possible and mitigate the effects of other factors on the probe experiment, we also experimented with the number of layers of MLP. The results of this are detailed in Appendix C.3.
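A minimal PyTorch sketch of the probe is given below. The hidden size is an arbitrary choice of ours, and the number of layers is left configurable since the paper varies it (Appendix C.3); the character embeddings are treated as frozen inputs.

```python
import torch
import torch.nn as nn

class BinaryProbe(nn.Module):
    """MLP over the concatenation of two frozen character embeddings."""
    def __init__(self, emb_dim=768, hidden=256, num_layers=2):
        super().__init__()
        layers, in_dim = [], 2 * emb_dim
        for _ in range(num_layers - 1):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, 1))
        self.mlp = nn.Sequential(*layers)

    def forward(self, emb_u, emb_w):
        x = torch.cat([emb_u, emb_w], dim=-1)   # frozen, non-trainable inputs
        return torch.sigmoid(self.mlp(x)).squeeze(-1)

# Training step (binary cross-entropy on the sigmoid output):
# probe = BinaryProbe()
# loss = nn.BCELoss()(probe(emb_u, emb_w), labels.float())
```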
## 3.2 Phonetic Probe
For phonetics, we train another binary classifier probe to predict whether two characters have a similar pronunciation, also using the frozen embeddings of these characters from Chinese PLMs as input. 'Similar' here means that the pinyin is exactly the same, while the tones can be different. That is, as shown in the lower part of Figure 2, if the probe is successful, it will predict that "称" (cheng) has a similar pronunciation to "程" (cheng) but not to "产" (chan). The pronunciation information for the Chinese characters comes from the pypinyin toolkit2.
1https://github.com/howl-anderson/hanzi_chaizi
2https://github.com/mozillazg/python-pinyin

We consider the static, non-contextualized embeddings of the same Chinese PLMs as in the glyph probe. We also mainly analyze the characters in the vocabulary of BERT and mainly consider common characters.
The dataset construction is also similar to the glyph probe. To create positive examples, for each character $w_i$ in the character list $W$, we find a character $u_i$ that has a similar pronunciation to $w_i$; then $(u_i, w_i)$ is a positive example. For each positive example, we also find a character $s_i$ that has a different pronunciation from $w_i$ to construct a negative example $(s_i, w_i)$. For example, a positive example consists of two characters with similar pronunciations, such as "称" (cheng) and "程" (cheng), while a negative example consists of two characters with different pronunciations, such as "称" (cheng) and "产" (chan). The split ratio and other settings are the same as for the glyph probe.
As in the glyph probe, we train the probe on the PLMs' static, non-trainable embeddings and concatenate the embeddings of each pair as input.
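A minimal sketch of the phonetic pair construction is shown below, using the pypinyin toolkit as the paper does. 'Similar pronunciation' is implemented as identical toneless pinyin; drawing one positive and one negative per character is our simplification of the balanced setup.

```python
import random
from pypinyin import lazy_pinyin, Style

def toneless_pinyin(char):
    return lazy_pinyin(char, style=Style.NORMAL)[0]   # pinyin without the tone

def build_phonetic_pairs(characters, seed=0):
    random.seed(seed)
    chars = list(characters)
    by_pinyin = {}
    for w in chars:
        by_pinyin.setdefault(toneless_pinyin(w), []).append(w)

    pairs = []
    for w in chars:
        py = toneless_pinyin(w)
        homophones = [c for c in by_pinyin[py] if c != w]
        others = [c for c in chars if toneless_pinyin(c) != py]
        if not homophones or not others:
            continue
        pairs.append((random.choice(homophones), w, 1))   # similar pronunciation
        pairs.append((random.choice(others), w, 0))       # different pronunciation
    return pairs
```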
## 3.3 Results And Analysis
The following conclusions can be drawn from Figure 3.
**The Chinese PLMs encode the glyph information of characters.** From the results, we can see that for glyphs, all models outperform the control. The results of the control are close to 50%, indicating that no glyph information is encoded in the input embedding and the model guesses randomly. Comparing Word2vec and the other Chinese PLMs side by side, we find that the large-scale pre-trained models have a significant advantage over Word2vec, suggesting that large-scale pre-training can lead to better character representations. In addition, we find that the results of these Chinese PLMs are concentrated in a small interval. Although ChineseBERT advertises the explicit introduction of glyph-phonetic information, it does not show an advantage on the glyph probe.
**PLMs can hardly distinguish the phonetic features of Chinese characters.** In our experiments, the control group performed similarly on the phonetic probe, with an accuracy of approximately 50%. Unlike the glyph probe, the accuracies of Word2vec and the other Chinese PLMs are also low in this probe. However, the introduction of phonetic embeddings allowed ChineseBERT to perform significantly better than the other models. Our anal-
![4_image_0.png](4_image_0.png)
| Method | Acc. |
|---------------------|--------|
| Control | 0.485 |
| Word2vec | 0.634 |
| BERT | 0.752 |
| RoBERTa | 0.759 |
| ChineseBERT | 0.755 |
| BERT-trained | 0.756 |
| RoBERTa-trained | 0.757 |
| ChineseBERT-trained | 0.759 |
**Model training on the CSC task does not enrich glyph and phonetic information.** We perform the same two probes using models fine-tuned on the SIGHAN dataset. We aim to investigate whether training for the CSC task could add glyph and phonetic information to the embeddings; the results are shown in Table 1. We find that the difference between the fine-tuned and untrained models is almost negligible, indicating that the relevant information is primarily encoded during the pre-training stage.
## 4 Experiment-II: Probing For Homonym Correction
In this experiment, we aim to explore the extent to which existing models can make use of the information from misspelled characters. To do this, we propose a new probe called Correction with Misspelled Character Coverage Ratio (CCCR),
which investigates whether the model can adjust its prediction probability distribution based on the glyph-phonetic information of misspelled characters when making predictions.
## 4.1 Correction With Misspelled Character Coverage Ratio
**Measuring whether models utilize the misspelled characters.** In this paper, we propose a method to evaluate the ability of a model to make predictions using the additional information from misspelled characters, as well as to assess whether the model contains glyph-phonetic information.
Assume that $C$ is the set of all possible finite-length sentences $C_i$ in the language $L$, $C = \{C_0, \ldots, C_i, \ldots\}$, $C_i = \{c_{i,1}, \ldots, c_{i,n}, \ldots\}$, where $c_{i,j} \in L$. Let the sentence $C_i^{n,a}$ be $C_i^{n,a} = \{c_{i,1}, \ldots, c_{i,n-1}, a, c_{i,n+1}, \ldots\}$. For a representation learning model $w$, let $H_w(C)$ be the hidden states of model $w$ and let $X_i$ be an example in $C$. The probability of the token at position $i$ is:

$$P(y_i = j \mid X_i, w) = \mathrm{softmax}\left(W H_w(X_i) + b\right)[j]$$

![5_image_0.png](5_image_0.png)
A dataset $D$ is a subset of $C$, with which we can approximate the model's probability. The CCCR is composed of *MLM* and *Homonym*. The former identifies which samples need the information of the misspelled character to be corrected, while the latter identifies the samples for which the model adjusts its output distribution. We take the intersection to obtain the frequency with which the model adjusts its prediction on the samples whose distribution should be adjusted.
**MLM.** MLM is a subset of the dataset $D$. For an input sentence $C_i \in D$, $C_i = \{c_1, c_2, [\mathrm{MASK}], \ldots, c_T\}$, where the position of $[\mathrm{MASK}]$ is the spelling error, and let the special token $mask = [\mathrm{MASK}]$. Then $C_i \in MLM$ if:

$$P\left(y_i = noise \mid C_i^{n,mask}, w\right) > P\left(y_i = Y_i \mid C_i^{n,mask}, w\right)$$

**Homonym.** Similarly to MLM, for an input sentence $C_i \in D$, $C_i = \{c_1, c_2, c_{misspelled}, \ldots, c_T\}$, where the position of $c_{misspelled}$ is the spelling error. For all sentences $C_i$ in the dataset $D$, $C_i \in Homonym$ if:

$$P\left(y_i = Y_i \mid C_i^{n,c_{misspelled}}, w\right) > P\left(y_i = noise \mid C_i^{n,c_{misspelled}}, w\right)$$

**Correction with Misspelled Character Coverage Ratio (CCCR).** The measured ratio describes a lower bound on the probability that the model uses the information of the misspelled characters for the sentences $C_i$ in the dataset:

$$CCCR = \frac{|\{C_i \mid C_i \in MLM \land C_i \in Homonym\}|}{|\{C_i \mid C_i \in MLM\}|}$$
**Baseline.** Independently, we give an estimation method for the base value. Given a model $w$, the *noise*, a dataset $D$, and the ground-truth correction $y$, the baseline of CCCR is estimated as:

$$guess_i = \frac{P(y_i = noise \mid C_i^{n,mask}, w)}{1 - P(y_i = noise \mid C_i^{n,mask}, w)}$$

$$CCCR_{baseline} = \frac{\sum_{C_i \in MLM} guess_i}{|\{C_i \mid C_i \in MLM\}|}$$
The baseline can be understood as the probability that a model with no glyph-phonetic information at all guesses the correct answer. Since no such language model exists, instead of inputting the misspelled characters into the model, we artificially let the model randomly guess an answer, weighted by probability, from the remaining candidates, which is equivalent to the probability of guessing correctly. This probability is comparable to CCCR: CCCR requires $y$ to overtake *noise*, and for the baseline, after rearranging the candidates, the probability of $y$ overtaking *noise* can likewise be obtained by re-normalizing the probabilities.
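Putting the definitions together, CCCR and its baseline can be computed as in the sketch below. The helper token_prob and the example format are our own abstractions rather than the paper's code: token_prob(tokens, pos, cand) is assumed to return the model's probability of candidate cand at position pos.

```python
def cccr(examples, token_prob, mask_token="[MASK]"):
    """examples: list of dicts with keys 'tokens' (list of characters),
    'pos' (error index), 'noise' (misspelled char), 'truth' (correct char)."""
    in_mlm, in_both, baseline_sum = 0, 0, 0.0
    for ex in examples:
        masked = list(ex["tokens"]); masked[ex["pos"]] = mask_token
        p_noise = token_prob(masked, ex["pos"], ex["noise"])
        p_truth = token_prob(masked, ex["pos"], ex["truth"])
        if p_noise <= p_truth:
            continue                                   # C_i is in MLM only if noise beats the truth
        in_mlm += 1
        baseline_sum += p_noise / (1.0 - p_noise)      # guess_i
        corrupted = list(ex["tokens"]); corrupted[ex["pos"]] = ex["noise"]
        if token_prob(corrupted, ex["pos"], ex["truth"]) > token_prob(corrupted, ex["pos"], ex["noise"]):
            in_both += 1                               # C_i is also in Homonym
    if in_mlm == 0:
        return 0.0, 0.0
    return in_both / in_mlm, baseline_sum / in_mlm     # (CCCR, CCCR_baseline)
```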
## 4.2 Isolation Correction Setting Experiment
In the previous section, we tested CCCR on models fine-tuned on the SIGHAN dataset and found that the CCCR of the models approached 92%. The results are shown in Table 3. As shown in Table 4, we analyze the overlap of correction pairs between the training and test sets of the SIGHAN dataset.
To test model generalization ability, we design the Isolation Correction task, which removes all overlapping correction pairs from the training set and duplicate pairs from the test set. With isolation, the training set is reduced by about 16%. We believe that such a setup better tests the generalizability of the model and is more challenging and practical.
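A minimal sketch of how the training set can be filtered for this setting is given below; the data format, in which each example carries its set of (misspelled, correct) character pairs, is an assumption for illustration, and deduplicating test-set pairs reduces to taking a set union.

```python
def isolation_training_set(train, test):
    """train/test: lists of examples, each with a set of (wrong_char, right_char)
    correction pairs under the key 'pairs' (format assumed for illustration)."""
    test_pairs = set()
    for ex in test:
        test_pairs |= ex["pairs"]                       # deduplicated test-set pairs
    # keep only training sentences that share no correction pair with the test set
    return [ex for ex in train if not (ex["pairs"] & test_pairs)]
```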
Within the CCCR probe, we explore whether the model relies on the information of the misspelled character itself, rather than merely memorizing content, on the isolated SIGHAN dataset. The results are shown in Table 2.
| Method | MLM | Homonym | CCCR | Precision | Recall | F1 |
|---------------------|-------|-----------|--------|-------------|----------|-------|
| Baseline | - | - | 15.61 | - | - | - |
| BERT-Initial | 45.58 | 64.87 | 34.57 | - | - | - |
| RoBERTa-Initial | 46.53 | 60.19 | 28.17 | - | - | - |
| ChineseBERT-Initial | 44.97 | 62.22 | 31.17 | - | - | - |
| BERT | 48.57 | 67.73 | 41.67 | 43.72 | 26.93 | 33.32 |
| RoBERTa | 48.70 | 64.80 | 36.12 | 39.82 | 27.14 | 32.27 |
| ChineseBERT | 46.33 | 67.39 | 40.32 | 42.56 | 27.26 | 33.23 |
| PLOME | 55.63 | 88.38 | 80.83 | 42.63 | 37.15 | 39.70 |
| ReaLiSe | 51.29 | 84.23 | 78.14 | 52.26 | 19.23 | 28.11 |
Table 2: Model performance in the isolation correction setting of SIGHAN15. '-Initial' means without any training.
| Method | MLM | Homonym | CCCR | Precision | Recall | F1 |
|-------------|-------|-----------|--------|-------------|----------|-------|
| Baseline | - | - | 15.61 | - | - | - |
| BERT | 52.64 | 95.78 | 92.1 | 70.15 | 75.46 | 72.71 |
| RoBERTa | 47.07 | 95.92 | 91.77 | 70.49 | 74.91 | 72.63 |
| ChineseBERT | 48.57 | 97.62 | 96.83 | 73.24 | 76.75 | 74.59 |
Table 4: The overlap of the correction pairs in the train and test sets and the statistics of the isolation SIGHAN
set.
| #Pairs Count | #sent | |
|-------------------------|---------|--------|
| Training Set | 23140 | 284196 |
| Test Set | 824 | 2162 |
| Training Set ∩ Test Set | 799 | - |
| Training Set ∪ Test Set | 23165 | - |
| Isolation Training Set | 20758 | 230525 |
| Isolation Test Set | 824 | 2162 |
Between the CCCR and F1 scores, we observe a mismatch phenomenon that we refer to as *stereotype*: correction pairs memorized during training harm the generalization of the models.
## 4.3 Results And Analysis
We conducted experiments on three generic Chinese PLMs, BERT, RoBERTa, and ChineseBERT, and two CSC models, PLOME and ReaLiSe. We compare the difference in metrics between the initial models and the models fine-tuned on the isolation training set. The results are shown in Table 2.
**CCCR and F1 values mismatch.** Our experimental results show that the CCCR and F1 values mismatch for CSC models. In the isolation training setting, we observed that the F1 values of PLOME and ReaLiSe are both significantly lower than their performance in the non-isolated setting, indicating that their ability to make correct predictions is primarily based on the memory of correction pairs in the training set. However, their CCCR values remained high, suggesting that they are able to discriminate glyph-phonetic information but are not able to use it to correct effectively.

![6_image_0.png](6_image_0.png)
**Stereotype harms the generalization ability of the model in isolation correction experiments.** These results suggest that the correction performance of the models is primarily dependent on their memorization ability and that a strong reliance on memory can hinder generalization. The poor performance in the isolation setting indicates that none of the current methods generalize well, which presents a significant challenge for future CSC research. We recommend that future research in this field follow the isolation experiment setting to address this challenge.
## 5 Conclusion
In this paper, we have explored the role of glyph-phonetic information from misspelled characters in Chinese Spell Checking (CSC). Based on our experimental results, we have reached the following conclusions:
- Current Chinese PLMs encoded some glyph information, but little phonetic information.
- Existing CSC models could not fully utilize the glyph-phonetic information of misspelled characters to make predictions.
- There is a large amount of overlap between the training and test sets of the SIGHAN dataset, which is not conducive to testing the generalizability of CSC models. We propose a more challenging and practical setting to test the generalizability of the CSC task.
Our detailed observations can provide valuable insights for future research in this field. It is clear that a more explicit treatment of glyph-phonetic information is necessary, and researchers should consider how to fully utilize this information to improve the generalizability of their CSC models.
We welcome follow-up researchers to verify the generalizability of their models using our proposed new setting.
## 6 Limitations

## 6.1 Limited Number Of CSC Models Tested
During our research, we encountered difficulties in reproducing previous models due to unmaintained open source projects or the inability to reproduce the results claimed in the papers. As a result, we are unable to test all of the available models.
## 6.2 Limited Datasets For Evaluating Model Performance
There are currently few datasets available for the CSC task, and the mainstream SIGHAN dataset is relatively small. The limited size of the data used to calculate the metrics may not accurately reflect the performance of the models. Furthermore, we found that the quality of the test set is poor, its domain coverage is narrow, and there is a large gap between the test set and real-world scenarios.
## Acknowledgments
This work was supported by the National Key Research and Development Program of China
(No.2020AAA0106700) and National Natural Science Foundation of China (No.62022027). We would like to express our gratitude to all the reviewers for their diligent, careful, and responsible feedback.
## References
Xingyi Cheng, Weidi Xu, Kunlong Chen, Shaohua Jiang, Feng Wang, Taifeng Wang, Wei Chu, and Yuan Qi. 2020. Spellgcn: Incorporating phonological and visual similarities into language models for chinese spelling check. *arXiv preprint arXiv:2004.14166*.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pretrained models for chinese natural language processing. *arXiv preprint arXiv:2004.13922*.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, and Ziqing Yang. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101.
Falcon Dai and Zheng Cai. 2017. Glyph-aware embedding of Chinese characters. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 64–69, Copenhagen, Denmark.
Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*.
Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, and Junhui Liu. 2019. Faspell: A fast, adaptable, simple, powerful chinese spell checker based on dae-decoder paradigm. In *Proceedings of the 5th Workshop on* Noisy User-generated Text (W-NUT 2019), pages 160–
169.
Tuo Ji, Hang Yan, and Xipeng Qiu. 2021. Spellbert:
A lightweight pretrained model for chinese spelling check. In *Proceedings of the 2021 Conference on* Empirical Methods in Natural Language Processing, pages 3544–3551.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.
arXiv preprint arXiv:1910.13461.
Chao-Lin Liu, Min-Hua Lai, Yi-Hsuan Chuang, and Chia-Ying Lee. 2010. Visually and phonologically similar characters in incorrect simplified Chinese words. In *Coling 2010: Posters*, pages 739–747, Beijing, China. Coling 2010 Organizing Committee.
Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2019. Robust neural machine translation with joint textual and phonetic embedding. In *Proceedings of the 57th Annual Meeting of* the Association for Computational Linguistics, pages 3044–3049, Florence, Italy. Association for Computational Linguistics.
Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, and Di Wang. 2021. Plome: Pre-training with misspelled knowledge for chinese spelling correction. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th* International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2991–
3000.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Yuxian Meng, Wei Wu, Fei Wang, Xiaoya Li, Ping Nie, Fan Yin, Muyu Li, Qinghong Han, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for chinese character representations. *Advances in Neural Information Processing Systems*, 32.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality.
Advances in neural information processing systems, 26.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. *OpenAI* blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. *J. Mach. Learn. Res.*, 21(140):1–67.
Yan Shao, Christian Hardmeier, Jörg Tiedemann, and Joakim Nivre. 2017. Character-based joint segmentation and POS tagging for Chinese using bidirectional RNN-CRF. In *Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pages 173–183, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, and Xipeng Qiu.
2021. Cpt: A pre-trained unbalanced transformer for both chinese language understanding and generation. *arXiv preprint arXiv:2109.05729*.
Xinlei Shi, Junjie Zhai, Xudong Yang, Zehua Xie, and Chao Liu. 2015. Radical embedding: Delving deeper to Chinese radicals. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2:
Short Papers), pages 594–598, Beijing, China. Association for Computational Linguistics.
Yaming Sun, Lei Lin, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2014. Radical-enhanced chinese character embedding. In International Conference on Neural Information Processing, pages 279–286.
Springer.
Zijun Sun, Xiaoya Li, Xiaofei Sun, Yuxian Meng, Xiang Ao, Qing He, Fei Wu, and Jiwei Li. 2021. ChineseBERT: Chinese pretraining enhanced by glyph and Pinyin information. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1:
Long Papers), pages 2065–2075, Online. Association for Computational Linguistics.
Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 2015. Introduction to SIGHAN 2015 bake-off for chinese spelling check. In *Proceedings* of the Eighth SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2015, Beijing, China, July 30-31, 2015, pages 32–37. Association for Computational Linguistics.
Dingmin Wang, Yan Song, Jing Li, Jialong Han, and Haisong Zhang. 2018. A hybrid approach to automatic corpus generation for chinese spelling check.
In *Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing*, pages 2517–2527.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, pages 38–45.
Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee.
2013. Chinese spelling check evaluation at SIGHAN
bake-off 2013. In *Proceedings of the Seventh* SIGHAN Workshop on Chinese Language Processing, SIGHAN@IJCNLP 2013, Nagoya, Japan, October 14-18, 2013, pages 35–42. Asian Federation of Natural Language Processing.
Heng-Da Xu, Zhongli Li, Qingyu Zhou, Chao Li, Zizhen Wang, Yunbo Cao, Heyan Huang, and XianLing Mao. 2021. Read, listen, and see: Leveraging 9 multimodal information helps chinese spell checking.
arXiv preprint arXiv:2105.12306.
Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity Chinese word embedding. In *Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing*, pages 981–986, Austin, Texas. Association for Computational Linguistics.
Liang-Chih Yu, Lung-Hao Lee, Yuen-Hsien Tseng, and Hsin-Hsi Chen. 2014. Overview of SIGHAN 2014 bake-off for chinese spelling check. In *Proceedings* of The Third CIPS-SIGHAN Joint Conference on Chinese Language Processing, Wuhan, China, October 20-21, 2014, pages 126–132. Association for Computational Linguistics.
Ruiqing Zhang, Chao Pang, Chuanqiang Zhang, Shuohuan Wang, Zhongjun He, Yu Sun, Hua Wu, and Haifeng Wang. 2021. Correcting chinese spelling errors with phonetic pre-training. In *Findings of* the Association for Computational Linguistics: ACLIJCNLP 2021, pages 2250–2261.
Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked bert. *arXiv preprint arXiv:2005.07421*.
Xiaotian Zhang, Hang Yan, Sun Yu, and Xipeng Qiu. 2022. Sdcl: Self-distillation contrastive learning for chinese spell checking. arXiv preprint arXiv:2210.17168.
Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoyong Du. 2019. Uer: An open-source toolkit for pre-training models. *EMNLP-IJCNLP 2019*, page 241.
| Training Set | #Sent | Avg. Length | #Errors |
|----------------|---------|---------------|-----------|
| SIGHAN14 | 3,437 | 49.6 | 5,122 |
| SIGHAN15 | 2,338 | 31.3 | 3,037 |
| Wang271K | 271,329 | 42.6 | 381,962 |
| Total | 277,104 | 42.6 | 390,121 |
| Test Set | #Sent | Avg. Length | #Errors |
| SIGHAN14 | 1,062 | 50.0 | 771 |
| SIGHAN15 | 1,100 | 30.6 | 703 |
| Total | 2,162 | 40.5 | 1,474 |
## A The Statistics Of The SIGHAN Dataset
Table 5: Statistics of the SIGHAN datasets.
| Dataset | Model | Para 1 Prec. | Para 1 Rec. | Para 1 F1 | Para 2 Prec. | Para 2 Rec. | Para 2 F1 | Para 3 Prec. | Para 3 Rec. | Para 3 F1 |
|----------|-------------|--------------|-------------|-----------|--------------|-------------|-----------|--------------|-------------|-----------|
| SIGHAN14 | BERT | 65.7 | 68.7 | 67.2 | 65.3 | 70.1 | 67.6 | 60.2 | 63.7 | 61.9 |
| SIGHAN14 | RoBERTa | 64.9 | 69.3 | 67.1 | 64.0 | 67.6 | 65.7 | 58.8 | 64.9 | 62.7 |
| SIGHAN14 | ChineseBERT | 63.5 | 68.2 | 65.7 | 62.1 | 66.6 | 64.3 | 65.5 | 70.3 | 67.8 |
| SIGHAN15 | BERT | 74.1 | 78.4 | 76.2 | 71.8 | 76.9 | 74.3 | 70.1 | 72.6 | 71.3 |
| SIGHAN15 | RoBERTa | 73.9 | 78.0 | 75.9 | 71.9 | 76.0 | 74.9 | 68.0 | 73.8 | 70.7 |
| SIGHAN15 | ChineseBERT | 73.3 | 78.5 | 75.8 | 72.4 | 77.4 | 74.8 | 73.2 | 76.7 | 74.9 |

Table 6: All results for fine-tuning pre-trained models on the raw data (Para 1-3 denote the three sets of training parameters).
## B The Experimental Results Of Different Parameters
In Experiment I, we use the average of three sets of training parameters as the final result, which is due to the large fluctuation of performance on the test set during the experiment.
We use the pre-trained weights released by Cui et al. (2020). For all of our models, we use the AdamW optimizer (Loshchilov and Hutter, 2019) to optimize the model for 20 epochs; the learning rate is set to 5e-5, the batch size is 48, and the warm-up ratio is set to 0.3.
## C Probe Details
Our implementation uses PyTorch (Paszke et al., 2019) and HuggingFace (Wolf et al., 2020). The probes for each MLP are trained separately, starting from randomly initialized weights. We train each probe via a binary classification task, using the Adam optimizer and cross-entropy loss.
## C.1 Plms Considered
We selected several mainstream Chinese PLMs as our research objects, along with their model card on Huggingface:
BERT-Chinese (Cui et al., 2019) consists of two pre-training tasks: Masked Language Model (MLM) and Next Sentence Prediction (NSP), and introduces a strategy called whole word masking (wwm) to optimize the original masking in the MLM task. We consider the base model with 110 million parameters. Model Card: 'hfl/chinese-bert-wwm-ext' under the Joint Laboratory of HIT and iFLYTEK Research.
RoBERTa-Chinese (Cui et al., 2019) removes the next sentence prediction task and uses dynamic masking in the MLM task. We also consider the base model. Model Card: 'hfl/chinese-roberta-wwm-ext' under the Joint Laboratory of HIT and iFLYTEK Research.
ChineseBERT (Sun et al., 2021) proposes to integrate the glyph-phonetic information of Chinese characters into the Chinese pre-training model to enhance the ability to model the Chinese corpus. We consider the base model. Model Card:'junnyu/ChineseBERT-base' under Joint Laboratory of HIT and iFLYTEK Research.
MacBERT (Cui et al., 2020) suggests that the [MASK] token should not be used for masking; instead, similar words should be used, because [MASK] rarely appears in the fine-tuning phase. We also consider the base model. Model Card: 'hfl/chinese-macbert-base' under the Joint Laboratory of HIT and iFLYTEK Research.
CPT (Shao et al., 2021) proposes a pre-trained model that takes into account both understanding and generation. Adopting a single-input multiple-output structure allows CPT to be used flexibly, separately or in combination, for different downstream tasks to fully utilize the model's potential. We consider the base model. Model Card: 'fnlp/cpt-base' under Fudan NLP.
BART-Chinese (Lewis et al., 2019; Shao et al.,
2021) proposes a pre-training model that combines bidirectional and autoregressive approaches. BART first corrupts the original text with arbitrary noise and then learns to reconstruct it. In this way, BART not only handles text generation tasks well but also performs well on comprehension tasks. We consider the base model.
Model Card:'fnlp/bart-base-chinese' under Fudan NLP.
T5-Chinese (Raffel et al., 2020; Zhao et al.,
2019) leverages a unified text-to-text format that treats various NLP tasks as Text-to-Text tasks, i.e.,
tasks with Text as input and Text as output, which attains state-of-the-art results on a wide variety of NLP tasks. We consider the base model. Model Card:'uer/t5-base-chinese-cluecorpussmall' under UER.
## C.2 The Statistics Of Probe Dataset
We remove some rare characters for two reasons.
Firstly, these characters are rarely encountered as misspellings in the CSC task. Secondly, these characters appear infrequently in the training corpus of the PLMs, which we believe would make it excessively challenging for the PLMs to learn them effectively.
The statistics are shown in Table 7 and Table 8.
| #Pos. | #Neg. | #Total | |
|--------------|---------|----------|-------|
| Training Set | 7968 | 7968 | 15936 |
| Test Set | 1992 | 1992 | 3984 |
Table 7: The statistics of the dataset for the glyph probe.
| #Pos. | #Neg. | #Total | |
|--------------|---------|----------|-------|
| Training Set | 8345 | 8345 | 16690 |
| Test Set | 2087 | 2087 | 4174 |
Table 8: The statistics of the dataset for the phonetic probe.
## C.3 Probing Results From Models With Different Numbers Of MLP Layers

From the experimental results, it can be seen that the number of MLP layers has little effect on the results, and most of the results of the pre-trained models are concentrated in the interval of 0.75-0.76. The Chinese pre-trained models of the BERT family are slightly less effective when the number of layers is relatively small, and become similar to the other Chinese pre-trained models with more than three layers.

![10_image_0.png](10_image_0.png)
# A Self-Supervised Integration Method of Pretrained Language Models and Word Definitions
Hwiyeol Jo NAVER Search US
[email protected]
## Abstract
We investigate the representation of pretrained language models and humans, using the idea of word definition modeling–how well a word is represented by its definition, and vice versa. Our analysis shows that a word representation in pretrained language models does not successfully map its human-written definition and its usage in example sentences. We then present a simple method DefBERT that integrates pretrained models with word semantics in dictionaries. We show its benefits on newly-proposed tasks of definition ranking and definition sense disambiguation. Furthermore, we present the results on standard word similarity tasks and short text classification tasks where models are required to encode semantics with only a few words. The results demonstrate the effectiveness of integrating word definitions and pretrained language models.1
## 1 Introduction
A word embedding vector maps a word into a fixed-dimensional vector as a distributed representation.
The word vectors are trained by looking at their context words and aggregating their representations in supervised ways (Turney, 2013) or unsupervised ways (Mikolov et al., 2013; Pennington et al.,
2014). More recently, the representations have been learned as a form of pretrained language models (Peters et al., 2018; Devlin et al., 2019). The huge success of these pretrained language models on various NLP tasks is achieved by capturing a rich semantic representation of words from their context in huge data.
On the other hand, for centuries, lexicographers and linguists have created dictionaries that contain general definitions of words and examples of their usage. With these sophisticated data, there have been many applications for NLP tasks (e.g., machine translation (Hill et al., 2016), semantic relatedness classification (Bahdanau et al., 2017)).

1https://github.com/hwiyeoljo/DefBERT
Distances between the word '**love**' and its definitions:
1. An intense feeling of deep affection. (57.8)
   A feeling of deep romantic or sexual attachment to someone. (139.8)
2. Affectionate greetings conveyed to someone on one's behalf. (126.6)
3. A formula for ending an affectionate letter. (64.9)
4. A personified figure of love, often represented as Cupid. (149.0)
5. A great interest and pleasure in something. (66.0)
6. A person or thing that one loves. (103.7)
7. A friendly form of address. (44.9)
8. Used in affectionate requests. (93.9)
   (in tennis, squash, and some other sports) a score of zero; nil. (117.5)
9. Feel deep affection for (someone). (85.4)
10. Feel a deep romantic or sexual attachment to (someone). (191.5)
11. Like or enjoy very much. (71.3)

The closest definition to the word '**love**': "**Several.**" (definition of the word '**number**') (27.3)
Table 1: The mean squared distance between the word
'love' and its definitions in a dictionary (top; |Wi-Dwi|),
and the closest distance between the word and any definitions in our collected dictionary (bottom; |Wi-Dwj|).
Each word or definition is embedded by BERT (see §3).
Some recent works have used WordNet (Miller, 1995) for fine-tuning BERT for word sense disambiguation (Huang et al., 2019; Guo et al., 2020),
whereas our work uses up-to-date dictionary definitions and usage examples to fine-tune pretrained language models.
In this work, we study the difference between machine-learned definitions and *human-written* definitions. Table 1 shows the mean squared distance between the vanilla BERT representation (the last hidden layer of [CLS]) for the word 'love' and the sentence representation (by [CLS]) for its definitions in dictionaries. The closest word of 'love' in the pretrained model is 'number' in our data collection. This indicates a potential risk of using pretrained representations as the only means to measure the semantic similarity between words or short sentences, where the context words are insufficient to get good representations.
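The distances in Table 1 follow this recipe; the sketch below is an illustration rather than a reproduction (the absolute scale of the reported numbers may depend on implementation details such as summing versus averaging over dimensions).

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def cls_vector(text):
    """Encode a word or a definition and return the [CLS] vector of the last layer."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        return bert(**enc).last_hidden_state[0, 0]

def mean_squared_distance(a, b):
    return torch.mean((a - b) ** 2).item()

word_vec = cls_vector("love")
def_vec = cls_vector("An intense feeling of deep affection.")
print(mean_squared_distance(word_vec, def_vec))
```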
Furthermore, it is important to make general and self-indicative embeddings. For example, if the pretrained embeddings cannot be used on their own and a pooling layer has to be added, we need additional training data to fine-tune the pooling layer and the pretrained model. On the other hand, if we can do the same task using the pretrained model only (without fine-tuning), this indicates good generalization.
Lastly, some researchers believe that the target word token representation is better than the [CLS] token when the input text is short. However, we do not know what the short text is or when the model will get short text as input. Thus, being able to use the [CLS] token for a single word or a short text is beneficial in that we do not need to consider the input length. To this end, we attempt to inject word-definition-example (usage) information into the model.
To overcome the deficiency and get such a generalized model, we propose a new joint representation that combines the human-written word definition with its usage example in a dictionary entry.
We show the effectiveness of this new representation on several downstream tasks.
The main contributions are:
- Performed extensive analyses of how close the representations of pretrained language models are to those of collected human-written definitions; our analyses show that the representations of BERT do not reflect the human-written definitions.
- Incorporated dictionary definitions into pretrained language models at the embedding level, as a new model called DefBERT (§4), showing significant performance improvements where tasks lack contextual information.
- Proposed two semantics-related ranking tasks:
DefRank aims to find the correct definition given the word, and SenseRank is to find the proper sense from a word's definitions given the word's usage. Unsurprisingly but interestingly, DefBERT shows significant improvements in both tasks.
## 2 Related Work
**Using dictionaries for NLP tasks.** Dict2vec (Tissier et al., 2017) learned word embeddings through word-definition pairs: it designed strong and weak word pairs within dictionaries and made the word pairs close in the embedding space. Bahdanau et al. (2017) utilized dictionaries to solve out-of-vocabulary (OOV) problems by encoding the definitions of OOV words and generating the words' embeddings. Hill et al. (2016) suggested a dictionary-based learning task using neural networks, as well as a reversed dictionary evaluation task that chooses the most related word to a given description. Like dictionaries, WordNet (Miller, 1995) has been widely used to enrich word representations (Faruqui et al., 2015).
However, the prior works were biased to inject relation knowledge, such as synonyms, rather than general word definitions.
More recently, GlossBERT (Huang et al., 2019)
used definitions for disambiguation tasks, but the approach needs context-gloss pairs and a classifier even at inference. In this work, we attempt to build a generalized model which does not require additional classifiers.
**Definition modeling.** The definition modeling task was proposed by Noraset et al. (2017); it generates a word definition from the word's embedding. The authors considered definition modeling a special case of language modeling and used it for word embedding evaluation. However, Gadetsky et al. (2018) found that the prior definition modeling task could not resolve word sense ambiguity because it is conditioned on only a single word. To address the issue, they extended Noraset et al. (2017)'s model to process context.
Chang and Chen (2019) investigated whether contextual representations can capture word definitions. Unlike the prior works on definition modeling, they suggested a general framework that maps the contextualized representation into a definition embedding space and then selects top-N closest definitions. This retrieval-based approach can resolve the problems in the generative approach of definition modeling, such as the difficulty in evaluation.
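As a rough illustration of this retrieval-based formulation, the sketch below ranks pre-computed definition embeddings by cosine similarity to a (projected) word representation; the projection into the definition space and the choice of similarity are abstractions of ours, not details taken from Chang and Chen (2019).

```python
import torch

def top_n_definitions(word_vec, def_matrix, n=5):
    """word_vec: (d,) projected word representation; def_matrix: (num_defs, d).
    Returns the indices of the n closest definitions."""
    sims = torch.nn.functional.cosine_similarity(word_vec.unsqueeze(0), def_matrix, dim=-1)
    return torch.topk(sims, k=min(n, def_matrix.size(0))).indices.tolist()
```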
The major differences between the prior works and our study are as follows: First, we compare representations from pretrained language models and definitions from a lexical dictionary at the embedding level. Second, we use word-definition pairs and definition-example pairs from the dictionary. The use of words in a sentence is similar to GlossBERT, but GlossBERT's objective is not to make a definition-injected representation; rather, it is to solve sense disambiguation tasks, and the method also requires an additional classifier.
| | Chang and Chen (2019) | Oxford+ (Ours) |
|--------------------------|-----------------------|----------------|
| # Words (W) | 31,889 | 30,533 |
| # Definitions (Def) | 79,105 | 93,227 |
| # Examples (Exam) | 707,001 | 1,167,055 |
| Avg./Max. # Def by Word | 10.6/65 | 10.5/51 |
| Avg./Max. # Exam by Def | 17.8/46 | 18.0/85 |
| Sense order | N | Y |
Table 2: Comparison of dictionary datasets. We build on and augment the prior work. The differences in the number of words, definitions, and examples are due to updates.
Lastly, we propose two tasks that can measure the capability of model representations on human-written definitions (and examples): DefRank and SenseRank. Compared to other benchmark datasets that predict how similar two words or sentences are, we expect these tasks to be a more straightforward benchmark.
## 3 Preliminary Analyses
The central motivation behind our analysis is to check whether a word representation in pretrained language models (in this work, BERT) can indicate the representation of its definition and vice versa.
## 3.1 Definition Dataset Collection: Oxford+
In prior work, Chang et al. (2018); Chang and Chen (2019) collected an online dictionary from lexico2(Oxford University Press, 2020). Since our work requires up-to-date definitions, we recollected the dataset based on the vocabulary of the original work.
Table 2 shows the comparison and statistics of the dictionary data. The number of unique vocabulary is slightly different from the previous one.
However, considering that the numbers of definitions and examples increase, we attribute the difference to updates of the lexico dictionary. Dictionaries usually order word senses by how frequently the senses are used, so the order information is important for investigating major versus minor definitions. Due to the more extensive coverage of usage and definitions, and the additional information, we call our dataset Oxford+.
From Oxford+, we take two sets of pairs and calculate the distances: one is a pair between a word and its definition (W-D), and the other is a pair between a definition and its usage (D-E) where the pairs are embedded by pretrained language model.
## 3.2 Distance Measures
Embedding scheme. We use bert-base-uncased in HuggingFace (Wolf et al., 2019) as a backbone model.3 Although there are several different ways to represent a word or sentence using BERT (e.g.,
averaging [CLS] in every hidden layer, concatenating [CLS], etc.), we use the [CLS] token in the last hidden layer, as the original BERT paper proposed.
For all definition-example pairs, we first input the example through BERT and then use the target word tokens in the example instead of using the
[CLS] token (see Figure 1). We average their vectors if the target word is tokenized by more than one token.
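For concreteness, the embedding scheme above can be sketched with the HuggingFace `transformers` API roughly as follows; the helper names, the naive subword-span search for the target word, and the [CLS] fallback are illustrative simplifications rather than the authors' released code.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def cls_embedding(text: str) -> torch.Tensor:
    """[CLS] vector of the last hidden layer, used here for words and definitions."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    return model(**enc).last_hidden_state[0, 0]            # (hidden_size,)

@torch.no_grad()
def target_word_embedding(example: str, word: str) -> torch.Tensor:
    """Average of the target word's wordpiece vectors inside the example sentence."""
    enc = tokenizer(example, return_tensors="pt", truncation=True, max_length=512)
    hidden = model(**enc).last_hidden_state[0]              # (seq_len, hidden_size)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for start in range(len(ids) - len(word_ids) + 1):       # naive subword-span search
        if ids[start:start + len(word_ids)] == word_ids:
            return hidden[start:start + len(word_ids)].mean(dim=0)
    return hidden[0]                                        # fall back to [CLS] if not found

w = cls_embedding("love")                                      # W
d = cls_embedding("an intense feeling of deep affection.")     # D
e = target_word_embedding("their love for their country", "love")  # E
```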
Let i be the word index, ij be the index of the j-th definition of the i-th word, and ijk be the index of the k-th example of the j-th definition of word i. Following our central motivation, the distance |Wi − Dij| between a word Wi and one of its definitions Dij is calculated by the mean squared distance. Likewise, the distance |Dij − Eijk| between a word used in an example Eijk and its definition Dij is calculated by the mean squared distance.4 In order to compare BERT's ability to capture human-written definitions, we need to control BERT's inputs and weights. We thus use (1) [PAD]-masked inputs on the target word and (2) BERT with random weights. For example, suppose the embedding of the empty input (BERT([PAD] ... [PAD])) is closer to the definition embedding (BERT(D)) than the single word embedding (BERT(W)). In that case, BERT does not seem to capture definition information through the inputs. The controls by [PAD] will be denoted as W[PAD] for the word and E[PAD] for the usage example, respectively. The [PAD]-controlled inputs are also illustrated in Figure 1. Likewise, if BERT with random weights performs better, BERT's pretrained weights do not have information about human-written definitions. We denote the controlled model as Rand. With this idea, we can define the distance types.
Distance Types. For each word-definition pair and each definition-example pair, we compute the following distances:

- |W-D|: the distance between the original input vector (INPUT in Figure 1) and the definition vector, for each layer.
- |W[PAD]-D|: the distance between the padded input vector (PAD INPUT in Figure 1) and the definition vector, for each layer.
- |Rand W-D|: the same as |W-D|, but with all the model weights randomly initialized.
- |D-E|: the distance between the definition vector and the target word vector used in the example sentence.
- |D-E[PAD]|: the distance between the definition vector and the padded target word vector in the example.
- |Rand D-E|: the same as |D-E|, but with all the model weights randomly initialized.

4 We do not use the W-E pair in this analysis, as it is not aligned with our central motivation since the pair does not use definition.
To sum up, the padded inputs and the randomized weights are used to contaminate the model representation. If the contaminated embeddings are closer to definitions than the vanilla input or model embeddings, the model representation is not meaningful.
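A minimal sketch of the controlled distances is shown below, reusing the `tokenizer`, `model`, and `cls_embedding` helper from the sketch in §3.2. It looks only at the last hidden layer (the per-layer curves above would use `output_hidden_states=True`), and the exact construction of the [PAD] control is one plausible reading of the description rather than the authors' code.

```python
import torch
from transformers import BertConfig, BertModel

# the |Rand ...| control: a BERT with randomly initialized weights
rand_model = BertModel(BertConfig()).eval()

def mse_distance(a: torch.Tensor, b: torch.Tensor) -> float:
    return ((a - b) ** 2).mean().item()

@torch.no_grad()
def pad_cls_embedding(text: str) -> torch.Tensor:
    """[PAD]-control: replace every non-special token by [PAD] and take the [CLS] vector."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    ids = enc["input_ids"].clone()
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(ids[0].tolist(), already_has_special_tokens=True)
    ).bool()
    ids[0, ~special] = tokenizer.pad_token_id
    return model(input_ids=ids, attention_mask=enc["attention_mask"]).last_hidden_state[0, 0]

word, definition = "love", "an intense feeling of deep affection."
w, d = cls_embedding(word), cls_embedding(definition)
with torch.no_grad():
    rand_w = rand_model(**tokenizer(word, return_tensors="pt")).last_hidden_state[0, 0]
    rand_d = rand_model(**tokenizer(definition, return_tensors="pt")).last_hidden_state[0, 0]

print("|W-D|      ", mse_distance(w, d))
print("|W[PAD]-D| ", mse_distance(pad_cls_embedding(word), d))
print("|Rand W-D| ", mse_distance(rand_w, rand_d))
```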
## 3.3 Findings
Distribution of Distances.
We visualize the distances between a target word 'love' and all definitions in Oxford+ (Figure 2). As we showed in
§ 1, the closest (or most similar) word to 'love' was
'number'. The definitions of 'love' are scattered over the distribution, indicating how BERT's representation of 'love' is far from its human-written definitions. We observe similar patterns in most words in the dictionary.
The pretrained representation of a word alone does not indicate its human-written definitions. Figure 3 (top) shows the averaged word-definition distances according to hidden layers. The |W-D| distance is smaller than the |W[PAD]-D| distance across all hidden layer depths. Since the difference between them is only the input, the word itself includes information about the human-written definition. In the same plot, however, the distance of randomized BERT |Rand W-D| is much lower than |W-D| and |W[PAD]-D| at the upper layers, which casts doubt on whether BERT's pretrained weights can represent human-written definitions. We thus conjecture that using a word alone is not appropriate for a contextualized representation since a single word lacks context.
To provide more context for the model, we conduct a second experiment to compare the definition's representation to its usage in the example sentence where pretrained language models have shown strong performances.
BERT can self-indicate better by using surrounding words but it still fails to capture the human-written definitions. Figure 3 (bottom)
shows the definition-example distances. The distances of |D-E| and |D-E[PAD]| show similar trends, but |D-E[PAD]| is smaller at the last hidden layer. The result shows the tokens are less self-indicated in the sentences, while the averaged distance of the randomized model is much smaller than in the ordinary settings.
From this analysis, the pretrained language model (especially BERT) seems unable to encode human-written definitions, as |Rand W-D| and |Rand D-E| show lower distance than |W-D| and
|D-E|, respectively. Also, the distances between the vanilla BERT and the padded models are small, which tells us that it might have potential benefits by adding semantic information.
## 4 DefBERT: Definition Induced BERT
Using lexical resources for fine-tuning word embeddings is a typical solution to take advantage of both lexical semantics and distributional semantics.
However, as seen in §3, the lexical relations, such as antonyms and synonyms, are unnatural to be integrated with pretrained language models. On the other hand, dictionary definitions and examples are expressed as complete sentences, leading to better settings for optimizing the pretrained models.
Based on the analysis (§3), we present a simple yet effective method to integrate general definitions from a dictionary with pretrained representations while keeping the nature of contextualization. The setup of BERT for fine-tuning is the same as Figure 1; we then fine-tune BERT using the distances as a loss function.
By doing so, we optimize BERT's representation to be close to its human-written definitions (W-D)
and its word representation used in the examples
(D-E). The loss functions used for each pair are as
follows:

$$\mathrm{L}_{\text{W-D}}=\frac{1}{\#W\times\#D}\sum_{i}\sum_{j}\sqrt{(\mathrm{W}_{i}-\mathrm{D}_{ij})^{2}}$$

$$\mathrm{L}_{\text{D-E}}=\frac{1}{\#W\times\#D\times\#E}\sum_{i}\sum_{j}\sum_{k}\sqrt{(\mathrm{D}_{ij}-\mathrm{E}_{ijk})^{2}}\qquad(2)$$
where i is the word index, j is the definition index of the j-th definition of the i-th word, and ijk is the index of the k-th example of the j-th definition for word i. The numbers of words, definitions, and examples are denoted as #W, #D, and #E, respectively. We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e-6 and a batch size of 32.
The maximum token length from our definition data is 191, including special tokens (e.g., [CLS]
and [SEP]), but we utilize the model's maximum capacity, which is 512.
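As an illustration, the two terms in Eq. (2) can be written as below. Reading the per-pair term sqrt((x − y)²) as a Euclidean-style distance over the hidden dimension is an assumption, and the random tensors only stand in for the [CLS]/target-word vectors produced by BERT (the paper reports Adam with learning rate 5e-6 and batch size 32).

```python
import torch

def pair_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # per-pair distance over the hidden dimension (assumed Euclidean-style)
    return torch.sqrt(((a - b) ** 2).sum(dim=-1) + 1e-12)

def defbert_losses(w_vec: torch.Tensor, d_vec: torch.Tensor, e_vec: torch.Tensor):
    """w_vec, d_vec, e_vec: (batch, hidden) vectors for aligned word/definition/example triples.
    Returns the L_{W-D} and L_{D-E} terms, averaged over the batch."""
    return pair_distance(w_vec, d_vec).mean(), pair_distance(d_vec, e_vec).mean()

# toy check with random vectors standing in for BERT outputs
w, d, e = (torch.randn(32, 768, requires_grad=True) for _ in range(3))
loss_wd, loss_de = defbert_losses(w, d, e)
(loss_wd + loss_de).backward()
```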
However, as we observed in the analysis (§3), the pretrained embeddings of source and target words
(i.e., W, D, and E) might not be appropriate to be trained. Therefore, we additionally design loss functions, which utilize the other dictionary information: the distance between the [CLS] token of W and the W tokens themselves (W'), to align the token embedding(s) to the [CLS] token. Likewise, the distance between W and E is used.

| Easy set: target word "love" | B | D |
|---|---|---|
| C1* affectionate greetings conveyed to someone on one's behalf. | 4 | 1 |
| C2 persist in an activity or process. | 1 | 3 |
| C3 a device for reducing mechanical vibration, in particular a shock absorber on a motor vehicle. | 2 | 4 |
| C4 denoting popular black culture in general. | 3 | 2 |
| **Challenge set: target word "love"** | **B** | **D** |
| C1* feelings of deep affection. | 4 | 1 |
| C2 regarded with deep affection. [dear] | 2 | 4 |
| C3 inspiring affection. [endearing] | 1 | 3 |
| C4 deep love and respect. [adoration] | 3 | 2 |
| **Neologism set: target word "ohana"** | **B** | **D** |
| C1* especially in hawaii: a family, including members of an extended family, as well as close friends and associates. | 4 | 1 |
| C2 a trouser leg. | 1 | 4 |
| C3 absence of difficulty or effort. | 2 | 3 |
| C4 an estimation of the quality or worth of someone or something. | 3 | 2 |

Table 3: Examples in DefRank easy (top), challenge (middle), and neologism (bottom) set. * indicates the gold definition. B and D mean the rank predicted by BERT and DefBERT, respectively.
$$\mathrm{L}_{\text{W-W}'}=\frac{1}{\#W}\sum_{i}\sqrt{(\mathrm{W}_{i}-\mathrm{W}'_{i})^{2}}$$

$$\mathrm{L}_{\text{W-E}}=\frac{1}{\#W\times\#D\times\#E}\sum_{i}\sum_{j}\sum_{k}\sqrt{(\mathrm{W}_{i}-\mathrm{E}_{ijk})^{2}}\qquad(3)$$
We use these additional loss functions for the calibration of DefBERT. As a result, we can use all the information in the dictionary in a self-supervised way.
In the training process, we prepare two BERT
models in order to make the training fast and keep BERT's original properties: one BERT model makes predictions and updates its weights by the loss(es), while the other BERT model only makes predictions that are used as target embeddings. The target BERT is copied from the trained model in every epoch. After the training, the fine-tuned BERT is selected.
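One plausible realization of this two-model setup is sketched below; which side is frozen, how the copy is triggered, and the toy W-D pair are assumptions about details the paragraph leaves open.

```python
import copy
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
online = BertModel.from_pretrained("bert-base-uncased")       # updated by the loss
target = copy.deepcopy(online).eval()                         # only produces target embeddings
for p in target.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(online.parameters(), lr=5e-6)
pairs = [("love", "an intense feeling of deep affection.")]   # toy stand-in for Oxford+ W-D pairs

for epoch in range(1):
    for word, definition in pairs:
        w_in = tokenizer(word, return_tensors="pt")
        d_in = tokenizer(definition, return_tensors="pt")
        w_vec = online(**w_in).last_hidden_state[:, 0]        # trainable side
        with torch.no_grad():
            d_vec = target(**d_in).last_hidden_state[:, 0]    # frozen prediction used as target
        loss = torch.sqrt(((w_vec - d_vec) ** 2).sum(-1) + 1e-12).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    target.load_state_dict(online.state_dict())               # the target BERT is copied every epoch
```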
## 5 Experiments

## 5.1 DefRank: Definition Ranking Task
Setup. To evaluate the ability of pretrained word vectors to capture human-written definitions at embedding-level (i.e., without classifiers), we present a task called Definition Ranking (DefRank).
Given a word, the model predicts the closest word definition among four candidate definitions. The main idea is similar to Chang and Chen (2019),
but DefRank looks at only a word and does not require an additional mapping function in the evaluation framework, which corresponds to our goal of obtaining a general embedding model. We assign approximately 10% of the data to the test set.5
DefRank has two sets based on task difficulty:
Easy set and Challenge set. The candidate definitions in the easy set are randomly sampled from Oxford+. On the other hand, the candidate definitions in the challenge set are the three closest definitions other than the gold definition. We use Sentence-BERT (Reimers and Gurevych, 2019) to choose similar negative examples as an adversarial constraint. Therefore, models are supposed to capture the subtle differences in meaning among the definitions of words such as love, dear, endearing, and adoration. Table 3 (top) and Table 3 (middle) show the examples.
Furthermore, the easy set has a sub-set called Neologism set, which consists of a newly coined word or expression. Thus, we can evaluate the models' ability even when the words never appear in the (pre-)training data.
To collect neologisms, we refer to the update notes of Oxford Dictionary and consider 'new word entries' as neologisms. We then process them by removing words that require a subscription to see the full definition and references in definitions to other similar words (e.g., See, Cf. and explanations after ';'). The number of collected neologisms is 345. Table 3 (bottom) presents the example of neologism.
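Because DefRank is evaluated purely at the embedding level, the prediction rule reduces to a nearest-definition lookup, sketched below with plain bert-base-uncased (a DefBERT checkpoint path could be swapped in). The candidate strings follow Table 3, and the squared-distance scoring is an assumption for illustration.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")   # or a DefBERT checkpoint path
model = BertModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def cls_vec(text: str) -> torch.Tensor:
    return model(**tokenizer(text, return_tensors="pt")).last_hidden_state[0, 0]

def defrank_predict(word: str, candidates: list) -> int:
    """Index of the candidate definition whose vector is closest to the word's vector."""
    w = cls_vec(word)
    dists = [((w - cls_vec(d)) ** 2).mean().item() for d in candidates]
    return min(range(len(dists)), key=dists.__getitem__)

candidates = [
    "feelings of deep affection.",          # gold sense of "love" in the challenge set
    "regarded with deep affection.",
    "inspiring affection.",
    "deep love and respect.",
]
print(defrank_predict("love", candidates))  # accuracy = how often the gold index is returned
```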
We compare BERT variations, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al.,
2019), Sentence-BERT (Reimers and Gurevych, 2019), and GlossBERT (Huang et al., 2019). Besides, we report the performance fine-tuned by masked language modeling on the definition data.
In the masked language training, we set an artificial template that "the definition of W is D and its example is E." As we mentioned in §4, W-W' pairs and W-E pairs are used for model calibration, denoted as [+W'] and [+E], respectively.
We also empirically find the optimal pair selection for DefBERT, which shows the best performances in DefRank, denoted as BestSelect.6
5After we post-process to clean the test data, the ratio becomes approximately 9%
6The best sequence of training is [+E]+W-D+D-E+[+W'].
Finally, we will report the BestSelect model performance.

| Model | Easy | Chal. | Neo |
|---|---|---|---|
| Randomized BERT | 29.11 | 26.52 | 31.01 |
| BERT-base | 32.41 | 25.81 | 36.52 |
| BERT-base(MLM-FT) | 36.32 | 26.04 | 29.28 |
| BERT-large | 33.91 | 25.79 | 36.81 |
| RoBERTa-base | 26.07 | 25.84 | 62.98 |
| Sentence-BERT | 75.08 | ∗30.45 | 65.22 |
| GlossBERT | 49.58 | 26.93 | 52.17 |
| ConceptNet | 83.88 | 32.58 | 35.36 |
| DefBERT(W-D) | 60.11 | 27.92 | 51.59 |
| DefBERT(D-E) | 74.28 | 31.11 | **70.72** |
| DefBERT([+W']) | 61.65 | 29.51 | 49.28 |
| DefBERT([+E]) | 78.55 | 31.53 | 68.12 |
| DefBERT([+W']W-D) | 74.22 | 30.59 | 60.58 |
| DefBERT([+E]W-D) | 83.27 | 32.29 | 69.28 |
| DefBERT([+W']D-E) | 79.04 | 32.32 | 68.99 |
| DefBERT([+E]D-E) | 80.73 | 32.54 | 67.25 |
| DefBERT(BestSelect) | **84.67** | **33.76** | 70.43 |

Table 4: Model performances on the DefRank easy, challenge (Chal.), and neologism (Neo) sets.
Results. Table 4 shows the performance on the DefRank task. Considering the high performance of Sentence-BERT, our tasks are well-designed to examine the semantics incorporated in model representations. The results show that fine-tuning by masked language modeling is ineffective for these tasks. Besides, GlossBERT does not perform well on these tasks, which implies that the word disambiguation model largely depends on the classifiers at the end of the architecture.
Our variations of DefBERT show much better performance since we train models with a similar distribution. However, it is interesting that D-E
pairs increase the model performances more, even though W-D pairs are directly related to the tasks.
The performance gaps between the baselines and our variations are small for the challenge set. The challenge set is therefore very hard: distinguishing the subtle variation of semantics requires a deeper understanding of definitions.
Lastly, we can find several properties of definition pairs. For example, calibrations with only
[+W'] or [+E] make significant improvements to the model. The models starting with calibration perform much better than the models without calibration. We guess that BERT's self-attention successfully normalizes the model.
| Input example "their love for their country" for target word "love" | B | D |
|---|---|---|
| C1* an intense feeling of deep affection. | 3 | 1 |
| C2 a great interest and pleasure in something. | 2 | 2 |
| C3 affectionate greetings conveyed to someone on one's behalf. | 4 | 3 |
| C4 a formula for ending an affectionate letter. | 1 | 4 |

Table 5: Examples in SenseRank task. * means the gold definition. B and D mean the rank predicted by BERT and DefBERT, respectively.
Moreover, DefBERT proves to be effective on neologisms. We conjecture that DefBERT learns unseen words (and their tokens) through other words' definitions. We also report the performance of the ConceptNet vectors (Speer et al., 2017). This representation is a strong baseline since the embeddings are fine-tuned and specialized on a number of tasks regarding word semantics. For evaluation, the sentence vectors are made by averaging the word vectors. ConceptNet shows good performance on the easy set and the challenge set, which also tells us that DefRank correlates with word semantics tasks, while it is hardly correct on neologisms. The combination of various types of lexical resources (e.g., dictionary, relation, WordNet) remains an interesting direction for future work.
## 5.2 SenseRank: Sense Disambiguation Task
Setup. Extending from DefRank, we propose another task SenseRank that distinguishes the different senses of definitions for the same word. In this setting, we provide a word and its usage, an example sentence. Then, models select the most appropriate sense of definitions among the word's definitions. Compared to Chang and Chen (2019),
SenseRank has to choose a gold definition among the candidate definitions from the same target word.
Therefore, the task can be used to measure the model's ability to do fine-grained sense disambiguation.
Table 5 shows four definitions for the target word
'love'. Given an example sentence, DefBERT correctly predicts the most similar sense of the definitions, while BERT fails. Similar to the challenge set of DefRank, the candidate definitions in SenseRank are semantically very similar (i.e., the variation of their senses), but this task has more contexts than DefRank.
We filter out the words for which the number of definitions is fewer than four. We then sample 10%
(115,849) as a test set.
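SenseRank only changes what is compared: the definition vectors are now matched against the target word as used in the example sentence. A sketch is given below, reusing the `tokenizer`, `model`, and `cls_vec` helper from the DefRank sketch above; the example follows Table 5, and the distance choice is again an illustrative assumption.

```python
import torch

@torch.no_grad()
def word_in_context_vec(example: str, word: str) -> torch.Tensor:
    """Average of the target word's wordpiece vectors inside the example sentence."""
    enc = tokenizer(example, return_tensors="pt")
    hidden = model(**enc).last_hidden_state[0]
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for s in range(len(ids) - len(word_ids) + 1):
        if ids[s:s + len(word_ids)] == word_ids:
            return hidden[s:s + len(word_ids)].mean(dim=0)
    return hidden[0]

def senserank_predict(example: str, word: str, senses: list) -> int:
    e = word_in_context_vec(example, word)
    dists = [((e - cls_vec(s)) ** 2).mean().item() for s in senses]
    return min(range(len(dists)), key=dists.__getitem__)

senses = [
    "an intense feeling of deep affection.",                        # gold for this usage
    "a great interest and pleasure in something.",
    "affectionate greetings conveyed to someone on one's behalf.",
    "a formula for ending an affectionate letter.",
]
print(senserank_predict("their love for their country", "love", senses))
```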
| Model | SenseRank |
|---------------------|-------------|
| BERT-base | 54.83 |
| BERT-base(MLM-FT) | 41.33 |
| BERT-large | 27.78 |
| RoBERTa-base | 43.23 |
| Sentence-BERT | 86.59 |
| GlossBERT | 52.25 |
| ConceptNet | 39.38 |
| DefBERT(W-D) | 74.94 |
| DefBERT(D-E) | 97.54 |
| DefBERT([+W']) | 90.02 |
| DefBERT([+E]) | 93.76 |
| DefBERT([+W']W-D) | 92.67 |
| DefBERT([+E]W-D) | 96.24 |
| DefBERT([+W']D-E) | 97.02 |
| DefBERT([+E]D-E) | 96.51 |
| DefBERT(BestSelect) | 97.27 |

Table 6: Model performances on the SenseRank task.
Results. Table 6 shows the performances on SenseRank. Similar to DefRank, the accuracies of the BERT variants are relatively low, except for Sentence-BERT, which is good at encoding semantics. Apart from D-E pairs, which are closely related to SenseRank, the other types of data pairs (i.e., W-D pairs, and the +W' and +E calibration pairs) also increase the model performances. Also, DefBERT with the best selection shows the largest improvement. The results indicate that the setup of DefBERT learns the sense-specific patterns between definitions and examples.
Moreover, ConceptNet performs worse than most of the BERT-variants, showing that context is an important factor in this task.
## 5.3 Downstream Task 1: Word-Similarity
Setup. Word similarity tasks can be used to evaluate word representations. They make use of Spearmann correlations to assess agreement between human ratings and computational representations of the similarity between word pairs. We use the evaluation tasks–WordSim (Finkelstein et al., 2001; Agirre et al., 2009), RareWord (Luong et al., 2013),
MEN (Bruni et al., 2012), SemEval (Camacho-Collados et al., 2017), SimLex (Hill et al., 2015),
and SimVerb (Gerz et al., 2016). For DefBERT, we choose the best selection model in DefRank. Note that there is no additional training on the word similarity datasets.
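The evaluation protocol can be sketched as follows, reusing the `cls_vec` helper from the sketches above. The word pairs and ratings below are only a tiny illustrative stand-in for the benchmarks, and scoring pairs with cosine similarity of [CLS] vectors is an assumption rather than a detail stated in the paper.

```python
import torch
from scipy.stats import spearmanr

@torch.no_grad()
def pair_score(w1: str, w2: str) -> float:
    return torch.cosine_similarity(cls_vec(w1), cls_vec(w2), dim=0).item()

# tiny illustrative subset: (word1, word2, human rating)
pairs = [("tiger", "cat", 7.35), ("book", "paper", 7.46), ("stock", "jaguar", 0.92)]
model_scores = [pair_score(a, b) for a, b, _ in pairs]
human_scores = [r for _, _, r in pairs]

rho, _ = spearmanr(model_scores, human_scores)   # agreement between model and human rankings
print(f"Spearman rho x 100 = {100 * rho:.1f}")
```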
Results. Table 7 shows performances on the word similarity tasks. The other embeddings, except for DefBERT, show poor performances. Additional masked language modeling fine-tuning increases the performance only a little. We conjecture that word similarity/relatedness tasks are very challenging for pretrained and contextualized models because no context is given (see §6 for further discussion). The result is the same as what we found in our preliminary distance analysis on word-definition pairs. On the other hand, DefBERT largely closes the gaps among word, definition, and usage, which leads to significant improvements from BERT in all the datasets.

| ρ × 100 | W-S | W-R | RW | MEN | SEM | SL | SV | Avg |
|---|---|---|---|---|---|---|---|---|
| BERT | 23.1 | 1.8 | 5.3 | 19.1 | 10.8 | 7.2 | 0.8 | 9.7 |
| BERT(FT) | 30.8 | 13.0 | 6.5 | 17.7 | 10.5 | 5.6 | 2.5 | 12.4 |
| Sent-BERT | 33.1 | 23.2 | 40.6 | 60.6 | 49.3 | 61.9 | 49.9 | 45.5 |
| GlossBERT | 26.6 | -3.6 | 25.7 | 30.8 | 30.7 | 28.3 | 15.0 | 21.9 |
| DefBERT | 71.6 | 51.8 | 46.7 | 76.5 | 58.7 | 53.2 | 41.1 | 57.1 |

Table 7: Model performances on word similarity tasks. WordSim dataset is categorized into semantics (W-S) and relation (W-R).

| | TREC | SST2 | IMDB |
|---|---|---|---|
| BERT | 97.1(.3) | 92.7(.2) | 93.4(.1) |
| BERT(MLM-FT) | 97.3(.3) | 91.4(.4) | 93.5(.1) |
| Sent-BERT | 97.3(.2) | 91.6(.3) | 93.4(.1) |
| GlossBERT | 96.8(.4) | 91.3(.3) | 92.9(.1) |
| DefBERT | 97.3(.2) | 92.7(.4) | 93.3(.1) |

Table 8: Model performances on text classification tasks.
## 5.4 Downstream Task 2: Short Text Classification
Setup. As we mentioned in §1 and showed in the previous experiments §5, BERT embedding for a word or short text did not make good representations. In order to generalize the effect of our integration, we employ text classification datasets–
TREC (Hovy et al., 2001), SST-2 (Socher et al.,
2013), and IMDB (Maas et al., 2011). All the datasets are relatively small, and the text length is short in TREC and SST-2, whereas IMDB is rather long. We report IMDB performance to show the performance of long text.
As the original paper did, we use the [CLS] token at the last hidden layer. The hyperparameters are 2e-5 for the learning rate and 32 for the mini-batch size. We use the Adam optimizer (Kingma and Ba, 2014). If a dataset does not have a validation set, we hold out 15% of the training set and use it for early stopping. The maximum length of tokens is 512.
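A sketch of this fine-tuning setup with the standard `transformers` sequence-classification head is shown below; the toy texts and label ids are hypothetical stand-ins for TREC, and loading DefBERT would amount to passing its checkpoint path to `from_pretrained`.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
clf = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)  # TREC-6
optimizer = torch.optim.Adam(clf.parameters(), lr=2e-5)            # settings reported above

texts = ["what is the capital of france ?", "who wrote hamlet ?"]  # toy stand-in for TREC
labels = torch.tensor([1, 2])                                       # hypothetical label ids

clf.train()
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
out = clf(**batch, labels=labels)     # the classification head sits on top of the [CLS] vector
out.loss.backward()
optimizer.step()
```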
Results. We present the performance of text classification in Table 8. Compared to other methods, DefBERT shows comparable performance with other baselines. Although the performance gap is small (we guess that the baseline is already strong),
DefBERT shows the best performance on the shortest dataset, TREC, which has at most 37 words (split by spaces). On the other hand, IMDB has a maximum of approximately 3,000 words. Though GlossBERT is also fine-tuned on external data (specifically, glosses), the result indicates that word disambiguation tasks are not related to representing a single word or a short sentence.
## 6 Conclusion And Further Discussion
We present a novel way of combining pretrained contextualized representations and human-written definitions from a dictionary. We first collect definitions and examples from an online dictionary Oxford+. Our analyses with the dictionary show that BERT's representations do not incorporate human-written definitions. Motivated by the findings, we develop a new representation DefBERT,
by constraining BERT to human-written definitions in the dictionary. In the experiments, we first proposed definition ranking (DefRank) and sense disambiguation tasks (SenseRank) and DefBERT outperforms other baselines. We also presented the effectiveness of DefBERT in downstream tasks: word similarity benchmark and short text classification tasks.
One of the contributions of this paper is to make researchers revisit the old and traditional resource, dictionaries. While resources, including synonyms, antonyms, and other relations, are widely used to improve models as a constraint, dictionaries are less frequently used. However, the dictionary is the basic form of word semantics and is a relatively objective resource compared to relational resources.
Furthermore, word-related resources are hard to align with pretrained language models because the representations change dynamically with context. Therefore, pouring in such resources can cause catastrophic forgetting, in which previously learned information disappears. For this problem, we suggest a potential approach to enhance semantics in the pretrained weights while maintaining the nature of the contextualized encoder.
## 7 Limitations
The performances except for the proposed tasks.
We presented the result of neologism and the performances on two downstream tasks (i.e., word similarity task and short text classification), which are closely related to the understanding of word semantics. The selected downstream tasks are challenging for the contextualized models; they can use only a few contexts to make a representation.
The performance in general benchmarks (e.g.,
GLUE) is almost the same as the vanilla BERT
because our model suffers catastrophic forgetting while learning definition information. Sophisticated modeling and training processes to overcome the problem could be interesting future work.
The use of other models. Other pretrained models like RoBERTa could be a base model for our method (e.g., DefRoBERTa). However, we think that BBPE tokens scarcely have semantic meanings, which makes it hard to find appropriate tokens to inject definition information into. Therefore, integrating human-written definitions with other types of tokens (e.g., Byte-Pair Encoding and Byte-level BPE) is also a future direction.
The use of all the loss functions & collecting more definition data. Presenting more experiments with other models, other collections of definition data, and other loss functions would further support our idea. Nevertheless, we want to show the performances with the widely used basic pretrained language model (i.e., BERT), using definition data from the previous work, with as many loss functions (e.g., W-D, D-E, [+W'], [+E]) as possible. A fine-grained combination of all the loss functions could make further improvements.
## Acknowledgement
The author would like to thank previous co-workers who discussed this idea long ago, including reviewers in several rounds of submission. Also, PAUST
gave helpful advice on experimental techniques and distributed software engineering. Lastly, I am grateful to Alice Lee for help in writing this work.
## References
Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalová, Marius Pasca, and Aitor Soroa. 2009. A
study on similarity and relatedness using distributional and wordnet-based approaches. In *Proceedings of Human Language Technologies: The 2009* Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 19–27.
Dzmitry Bahdanau, Tom Bosc, Stanisław Jastrz˛ebski, Edward Grefenstette, Pascal Vincent, and Yoshua Bengio. 2017. Learning to compute word embeddings on the fly. *arXiv preprint arXiv:1706.00286*.
Elia Bruni, Gemma Boleda, Marco Baroni, and NamKhanh Tran. 2012. Distributional semantics in technicolor. In *Proceedings of the 50th Annual Meeting* of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136–145.
Jose Camacho-Collados, Mohammad Taher Pilehvar, Nigel Collier, and Roberto Navigli. 2017. SemEval2017 task 2: Multilingual and cross-lingual semantic word similarity. In *Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval2017)*, pages 15–26, Vancouver, Canada. Association for Computational Linguistics.
Ting-Yun Chang and Yun-Nung Chen. 2019. What does this word mean? explaining contextualized embeddings with natural language definition. In *Proceedings of the 2019 Conference on Empirical Methods* in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6066–6072.
Ting-Yun Chang, Ta-Chung Chi, Shang-Chi Tsai, and Yun-Nung Chen. 2018. xsense: Learning senseseparated sparse representations and textual definitions for explainable word sense networks. arXiv preprint arXiv:1809.03348.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–
4186.
Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015.
Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606–1615, Denver, Colorado. Association for Computational Linguistics.
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In *Proceedings of the 10th international conference on World Wide Web*, pages 406–414.
Artyom Gadetsky, Ilya Yakubovskiy, and Dmitry Vetrov.
2018. Conditional generators of words definitions.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 266–271.
Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, and
Anna Korhonen. 2016. Simverb-3500: A large-scale evaluation set of verb similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173–2182.
Ping Guo, Yue Hu, and Yunpeng Li. 2020. Mg-bert: A
multi-glosses bert model for word sense disambiguation. In *International Conference on Knowledge Science, Engineering and Management*, pages 263–275.
Springer.
Felix Hill, KyungHyun Cho, Anna Korhonen, and Yoshua Bengio. 2016. Learning to understand phrases by embedding the dictionary. Transactions of the Association for Computational Linguistics, 4:17–
30.
Felix Hill, Roi Reichart, and Anna Korhonen. 2015.
Simlex-999: Evaluating semantic models with (genuine) similarity estimation. *Computational Linguistics*, 41(4):665–695.
Eduard Hovy, Laurie Gerber, Ulf Hermjakob, ChinYew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In *Proceedings* of the First International Conference on Human Language Technology Research.
Luyao Huang, Chi Sun, Xipeng Qiu, and Xuan-Jing Huang. 2019. Glossbert: Bert for word sense disambiguation with gloss knowledge. In *Proceedings of* the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 3500–3505.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.
Roberta: A robustly optimized bert pretraining approach. *arXiv preprint arXiv:1907.11692*.
Minh-Thang Luong, Richard Socher, and Christopher D
Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104–113.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts.
2011. Learning word vectors for sentiment analysis.
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. *CoRR*, abs/1301.3781.
George A Miller. 1995. Wordnet: a lexical database for english. *Communications of the ACM*, 38(11):39–41.
Thanapon Noraset, Chen Liang, Lawrence A Birnbaum, and Douglas C Downey. 2017. Definition modeling: Learning to define word embeddings in natural language. In *31st AAAI Conference on Artificial Intelligence, AAAI 2017*.
Oxford University Press. 2020. a new collaboration between dictionary.com and oxford university press
(oup). http://lexico.com/.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In *EMNLP*, pages 1532–1543.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In *Proceedings of NAACL-HLT*, pages 2227–2237.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In *Proceedings of the 2019 Conference on Empirical* Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3973–3983.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.
In *Proceedings of the 2013 conference on empirical methods in natural language processing*, pages 1631–1642.
Robyn Speer, Joshua Chin, and Catherine Havasi. 2017.
Conceptnet 5.5: an open multilingual graph of general knowledge. In *Proceedings of the Thirty-First* AAAI Conference on Artificial Intelligence, pages 4444–4451.
Julien Tissier, Christophe Gravier, and Amaury Habrard.
2017. Dict2vec: Learning word embeddings using lexical dictionaries. In *Proceedings of the 2017 Conference on Empirical Methods in Natural Language* Processing, pages 254–263.
Peter D. Turney. 2013. Distributional semantics beyond words: Supervised learning of analogy and paraphrase. *TACL*, 1:353–366.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. Huggingface's transformers: State-of-the-art natural language processing. *ArXiv*, abs/1910.03771.
## ACL 2023 Responsible NLP Checklist

## A For Every Submission:
✓ A1. Did you describe the limitations of your work?
7
✗ A2. Did you discuss any potential risks of your work?
Overall process has no potential risk
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?** 5
✗ C1. Did you report the number of parameters in the models used, the total computational budget
(e.g., GPU hours), and computing infrastructure used?
We used well-known BERT.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a *question on AI writing* assistance.
✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
5
✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
5
✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE,
etc.)?
5
## D ✗ **Did You Use Human Annotators (E.G., Crowdworkers) Or Research With Human Participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students)
and paid participants, and discuss if such payment is adequate given the participants' demographic
(e.g., country of residence)? No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
ravfogel-etal-2023-conformal | Conformal Nucleus Sampling | https://aclanthology.org/2023.findings-acl.3 | Language models generate text based on successively sampling the next word. A decoding procedure based on nucleus (top-$p$) sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability $p$. In this work, we assess whether a top-$p$ set is indeed aligned with its probabilistic meaning in various linguistic contexts.We employ conformal prediction, a calibration procedure that focuses on the construction of minimal prediction sets according to a desired confidence level, to calibrate the parameter $p$ as a function of the entropy of the next word distribution. We find that OPT models are overconfident, and that calibration shows a moderate inverse scaling with model size. | # Conformal Nucleus Sampling
Shauli Ravfogel1,2 **Yoav Goldberg**1,2 **Jacob Goldberger**1 1Bar-Ilan University 2Allen Institute for Artificial Intelligence
{shauli.ravfogel, yoav.goldberg}@gmail.com , [email protected]
## Abstract
Language models generate text based on successively sampling the next word. A decoding procedure based on nucleus (top-p)
sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability p. In this work, we assess whether a top-p set is indeed aligned with its probabilistic meaning in various linguistic contexts. We employ conformal prediction, a calibration procedure that focuses on the construction of minimal prediction sets according to a desired confidence level, to calibrate the parameter p as a function of the entropy of the next word distribution. We find that OPT models are overconfident, and that calibration shows a moderate inverse scaling with model size.
https://github.com/shauli-ravfogel/conformal-prediction
## 1 Introduction
Modern language generation methods are all based on computing the conditional next-word distribution. However, there is still considerable debate about the best way to extract the next word from that distribution. Most current text generation methods employ one of a handful of standard decoding strategies, which are characterized as either deterministic or stochastic in nature. A greedy search strategy selects the word with the highest probability at each timestep. The greedy method and its beam search variations work remarkably well for machine translation but outside of this context, tend to return dull text or degenerate text
(Holtzman et al., 2020; Cohen and Beck, 2019).
Holtzman et al. (2020) argued that high-quality human language does not follow a pattern of highestprobability next words, as humans expect the generated text to not be repetitive or boring. The same problem occurs with beam search.
Direct sampling from the next-word distribution computed by the model often generates incoherent gibberish text. Temperature sampling (Ackley et al., 1985) is a word sampling approach based on rescaling logit scores before applying the softmax function to compute the word distribution. Other methods limit the sampling space to a small **prediction set** to avoid the "unreliable tail" (Holtzman et al., 2020). In top-k sampling (Fan et al., 2018),
we sample only from the top-k most likely words.
Instead of sampling only from the most likely k words, top-p (nucleus) sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability p (Holtzman et al., 2020). Top-p sampling enables a dynamically sized window of words, unlike top-k which fixes the size of k for every step. Finally, locally typical sampling (Meister et al., 2022) and truncation sampling (Hewitt et al., 2022) are recent variants of top-p that aim to make it more suitable for language generation.
The top-p prediction set has a concrete probabilistic interpretation. Here we examine whether the probability that the "correct" word belongs to the set of words produced by the top-p algorithm is indeed p. More generally, we expect that the next-word prediction would be calibrated, meaning that the output of the next-word softmax layer would accurately reflect the true word distribution. Parametric calibration methods, such as Temperature Scaling (Guo et al., 2017), which adjust the confidence of the most probable word, are not suitable for adjusting the size of the prediction set. Conformal Prediction (CP) (Vovk et al., 1999, 2005; Shafer and Vovk, 2008; Angelopoulos and Bates, 2021) is a non-parametric calibration method that, given a value p, aims to build a prediction set with a guarantee that the probability that the correct word is within this set is indeed p. Note that this notion of calibration, which is distinct from the way calibration is usually formulated in language modeling settings, *exactly coincides* with the goal of the top-p prediction model. The model-agnostic and distribution-free nature of CP makes it particularly suitable for large neural network models. We thus applied CP analysis to assess whether the top-p procedure is calibrated and, if needed, tune it to have the desired probabilistic interpretation. We find that OPT models of different sizes (Zhang et al.,
2022) are not calibrated according to the conformal prediction theory, and that calibration shows moderate inverse scaling. Additionally, we show that the degree of calibration varies significantly with the entropy of the model's distribution over the vocabulary. We thus propose a new Conformal top-p **decoding** algorithm, which ensures that the top-p sampling has a meaningful probabilistic interpretation.
## 2 Cp For Language Generation
In this section, we briefly review the Split Conformal Prediction algorithm (Vovk et al., 2005) and discuss its relevance to language generation models. Consider a network that classifies an input x into k pre-defined classes. The network (softmax layer) output has the mathematical form of a distribution. However, this does not necessarily mean that it accurately reflects the true class distribution.
Let (*x, y*) be a test instance and its corresponding class. We want to find a small subset of classes
(a prediction set) C(x) ⊂ {1*, ..., k*} such that
$$p(y\in C(x))\geq1-\alpha\qquad(1)$$
where 1−α ∈ [0, 1] is a user-chosen error rate.
(We use the term 1−α instead of p to comply with CP standard notation). In words, the probability that the set C(x) contains the correct label is at least 1 − α. We call this property the marginal coverage since the probability is averaged over all the data points (*x, y*). Denote the prediction set obtained by taking the most probable classes until the total mass just exceeds a value q, by Cq(x).
Let qˆ ∈ [0, 1] be the smallest threshold value such that p(y ∈ Cqˆ(x)) ≥ 1−α. If qˆ > 1−α, the model can be viewed as over-confident. If qˆ < 1−α, the model can be viewed as under-confident, and if qˆ = 1−α, the model is calibrated in the sense that the probability that the correct label is in the 1−α prediction set is indeed 1−α.
If the model is not calibrated, we can calibrate it using a labeled validation set (x1, y1), ...,(xn, yn).
Denote pt(i) = p(yt = i|xt; θ). Define the **conformal scores** to be:
$$s_{t}=\sum_{\{i|p_{t}(i){\geq}p_{t}(y_{t})\}}p_{t}(i)\quad t=1,...,n\quad\mathrm{(2)}$$
This CP score is known as the Adaptive Prediction Sets (APS) score, and was first introduced in (Romano et al., 2020). Note that yt ∈ Cst(xt) and st is the minimal threshold in which the true class yt is in a prediction set of xt.
We next look for **a minimal threshold** qˆ such that the correct label yt is included in the prediction set Cqˆ(xt) for at least (1−α)n points of the validation set. In other words, qˆ calibrates the top-(1−α)
prediction-set on the validation set. We can easily find qˆ by first sorting the n scores s1*, ..., s*n and then qˆ is the (1−α)-quantile of the validation-set scores. Once the network is calibrated, if we want to form a prediction set for a new test sample x, that contains the true class with probability (1−α),
we use Cqˆ(x). The CP Calibration procedure for calibrating the top-p word decoding is summarized in Algorithm 1. The conformal prediction theory provides the following guarantee on the threshold qˆ (Vovk et al., 2005).
Theorem: Assume a test point (*x, y*) and the n validation points are independent and identically distributed (or at least exchangeable). Let qˆ be the
⌈(n+ 1)(1−α)/n⌉-quantile of the validation set scores. Then
$$1-\alpha\leq p(y\in C_{\hat{q}}(x))\leq1-\alpha+{\frac{1}{n+1}}.\quad\quad(3)$$
Note that this is a marginal probability over all the test points and is not conditioned on a given input. Exchangeability means that the sequence distribution is not altered by permuting the order of the random variables.
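A compact sketch of the calibration step described above: compute the APS score of each validation point and take the ⌈(n+1)(1−α)⌉-th smallest score as the threshold. The toy distributions are illustrative; in the setting of this paper the rows would be next-word distributions and the labels the observed next words.

```python
import numpy as np

def aps_scores(prob_rows: np.ndarray, gold: np.ndarray) -> np.ndarray:
    """Per-example conformal score: total mass of all classes at least as probable as the gold class."""
    gold_p = prob_rows[np.arange(len(gold)), gold][:, None]
    return (prob_rows * (prob_rows >= gold_p)).sum(axis=1)

def conformal_threshold(scores: np.ndarray, alpha: float) -> float:
    """q_hat: the ceil((n+1)(1-alpha))-th smallest score (saturates to the max for tiny n)."""
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return float(np.sort(scores)[k - 1])

# toy example: 3 validation points over a 4-word vocabulary
probs = np.array([[0.60, 0.20, 0.10, 0.10],
                  [0.40, 0.30, 0.20, 0.10],
                  [0.25, 0.25, 0.25, 0.25]])
gold = np.array([1, 0, 3])
q_hat = conformal_threshold(aps_scores(probs, gold), alpha=0.1)
print(q_hat)   # top-q_hat decoding then targets 90% marginal coverage
```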
In this study, we aim to apply the conformal prediction framework to language generation models to analyze the prediction sets used for sampling the next word. The joint distribution of words in a text is neither IID nor exchangeable, since the words are correlated and the order of the words in a sentence is significant. A recent study (Oliveira et al.,
2022) showed that applying the usual CP algorithm to a stationary β-mixing process (rather than an exchangeable one) results in a guaranteed coverage level of 1−α−η, where η depends on the mixing properties of the process and is theoretically hard to know, or bound.

**Algorithm 1** CP Calibration of the Top-p Decoding

Input: A validation set comprised of next-word distributions p1, ..., pn with the corresponding correct words y1, ..., yn, and a confidence level p.
for t = 1, ..., n do
  st = Σ_{i | pt(i) ≥ pt(yt)} pt(i)
end for
Define qˆ to be the ⌈(n + 1)p/n⌉-quantile of {s1, ..., sn}.
Output: Use top-qˆ decoding to guarantee that the probability that the correct word is in the top-qˆ prediction set is at least p.

Roughly speaking, β-mixing processes are stochastic processes in which far-away
points are approximately independent in a quantifiable manner. In all the examples they checked, the authors assessed that the additional penalty incurred by using CP with stationary β-mixing processes was virtually insignificant. Manning and Schutze (1999) argue that even though not quite correct, natural language can be modeled as stationary, ergodic processes. Khandelwal et al. (2018)
showed that the LSTM language model's memory is empirically bounded at roughly 200 words and thus the model can be viewed as an aperiodic recurrent (and therefore β-mixing) Markov chain. It is reasonable to assume that human language and transformer-based language models can also be modeled as β-mixing processes. Hence, applying CP to language generation models yields meaningful results (at least qualitatively).
## 3 Experiments
In this section, we apply the conformal prediction calibration method to analyze the calibration status of the top-p nucleus sampling.
Setup. We experimented with variants—from 125M parameters up to 30B parameters—of OPT
(Zhang et al., 2022), a left-to-right language model.
We ran the models on 10,000 English Wikipedia sentences1, and collected the distribution of the vocabulary over each token in each sentence, resulting in a total of 245,923 distributions. The distribution of the entropy values, as well as the maximum probability, was far from being uniform (Fig. 1). We sorted all the instances by entropy, and calibrated the examples belonging to each equally-sized percentile independently (from 0-10% to 90-100%).
The patterns are highly similar across models. We report results on the 350M parameters model unless specified otherwise. We use Nvidia 2080TI
GPUs.

1 https://huggingface.co/datasets/wikipedia
Dependency of the confidence on the entropy.
First, we evaluated the confidence scores of a standard nucleus sampling scheme. We chose p = 0.9
(a commonly used value) and recorded the effective confidence, i.e., the proportion of cases where the correct word was indeed in the top-p prediction set. Fig. 2 shows the effective confidence for the predictions belonging to different percentiles of entropy. The results indicated that setting p = 0.9 did not translate to a prediction set that contained the correct token in 90% of the cases, motivating our calibrated decoding. In Fig. 3, we show the per-entropy CP calibration results, for 10 entropy bins corresponding to percentiles. While the model was always overconfident, the level of overconfidence decreases with the entropy percentile. In other words, when the model is apparently the most certain—as reflected in low entropy values—it is most overconfident. Note that in the case of low entropy the single highest probability can be more than 0.9. Hence, there is no way to calibrate the prediction set by changing its size. In particular, we found that the model is overconfident when the gold token is a function word: it tends to allocate high probability to a small set of function words, while the true distribution is more varied.
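The effective-confidence measurement amounts to checking, per entropy bin, how often the gold token falls inside the top-p set; a sketch is given below, with random distributions standing in for the collected model outputs.

```python
import numpy as np

def entropy(prob_rows: np.ndarray) -> np.ndarray:
    return -(prob_rows * np.log(prob_rows + 1e-12)).sum(axis=1)

def effective_coverage(prob_rows: np.ndarray, gold: np.ndarray, p: float = 0.9) -> float:
    """Fraction of positions whose gold token is inside the top-p (nucleus) set."""
    hits = 0
    for row, y in zip(prob_rows, gold):
        order = np.argsort(-row)
        cutoff = np.searchsorted(np.cumsum(row[order]), p) + 1
        hits += int(y in set(order[:cutoff]))
    return hits / len(gold)

def coverage_by_entropy_bin(prob_rows, gold, p=0.9, bins=10):
    ent = entropy(prob_rows)
    edges = np.quantile(ent, np.linspace(0, 1, bins + 1))
    return [effective_coverage(prob_rows[(ent >= lo) & (ent <= hi)],
                               gold[(ent >= lo) & (ent <= hi)], p)
            for lo, hi in zip(edges[:-1], edges[1:])]

probs = np.random.dirichlet(np.ones(50), size=200)               # stand-in distributions
gold = np.array([np.random.choice(50, p=row) for row in probs])  # stand-in observed tokens
print(coverage_by_entropy_bin(probs, gold, p=0.9))
```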
Calibration and scale. Fig. 4 presents the conformal threshold values qˆ versus the desired confidence
(1−α), when calibration is performed over the entire validation set (without partition to entropy bins).
As shown, for all confidence levels, the threshold qˆ
needed to ensure that the correct word is included within the prediction set is larger than the confidence level itself (the y = x dashed line). This indicates that the model is *overconfident*. Fig. 4 also shows the dependency of calibration on the scale. Scaling language models has been shown to induce the emergence of new abilities, such as in-context learning (Brown et al., 2020). Empirical power laws were shown to predict performance in a different task as a function of scale (Kaplan et al.,
2020; Wei et al., 2022a), where models usually show improved performance with scale. Here, we find *inverse scaling* (Wei et al., 2022b), where calibration moderately deteriorates with model scale.
Generation. How does conformal p sampling affect generation? We use the 350M model to compare the quality of generation of conformal p sampling with the natural baseline of p sampling. We generate continuations for 1,000 prompts of size 35 words from the OpenWebText dataset.2 We generate up to 200 tokens, and compare conformal p = 0.9 prediction (setting 1 − α = 0.9)
with conventional p = 0.9 sampling.3 Following Fig. 3, when applying our method, we calculate the
entropy of the output distribution over each token, and dynamically set the threshold p for each token prediction, according to the threshold value qˆ that fits this entropy percentile. This ensures that the true probability of the token to be included within the prediction set (according to the training set used for calibration) is 0.9.
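A sketch of one decoding step under this scheme: look up the calibrated threshold for the token's entropy bin, then apply ordinary nucleus sampling with that threshold. The bin edges and per-bin thresholds below are hypothetical placeholders for the values obtained from the calibration set.

```python
import numpy as np

def conformal_top_p_step(probs: np.ndarray, entropy_edges: np.ndarray, q_hat_per_bin: np.ndarray) -> int:
    """Pick q_hat for this token's entropy bin, then sample from the top-q_hat nucleus."""
    ent = -(probs * np.log(probs + 1e-12)).sum()
    bin_idx = min(int(np.searchsorted(entropy_edges, ent)), len(q_hat_per_bin) - 1)
    q_hat = q_hat_per_bin[bin_idx]
    order = np.argsort(-probs)
    cutoff = np.searchsorted(np.cumsum(probs[order]), q_hat) + 1
    nucleus = order[:cutoff]
    return int(np.random.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum()))

# hypothetical calibration outputs: 10 entropy bins -> 9 interior edges and one q_hat per bin
edges = np.linspace(0.5, 8.0, 9)
q_hats = np.linspace(0.97, 0.91, 10)      # more overconfident (low-entropy) bins get larger thresholds
probs = np.random.dirichlet(np.ones(50))  # stand-in next-token distribution
print(conformal_top_p_step(probs, edges, q_hats))
```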
We evaluate the quality of the generation using MAUVE (Pillutla et al., 2021) and BERTScore
(Zhang et al., 2019).4 The MAUVE score is 0.933 for conformal-p sampling, and 0.920 for conventional p sampling. As for BERTScore, the F1 score is 0.840 for conformal-p sampling, and 0.843 for conventional p sampling. These results indicate that conformal-p sampling performs similarly to conventional p sampling.

4 Default HuggingFace v4.22.0 parameters were used.

Applicability of CP to non-IID data. Conformal prediction theory assumes IID data, while we build on the model's output distributions over consecutive tokens in the same sentence, which are of course highly dependent. We repeated the per-entropy-bin calibration process when uniformly sampling a *single* token per sentence, thus (almost) satisfying the independence assumption. The results were similar to Fig. 3, and in that case Eq. (3) is applicable.
## 4 Conclusions
To conclude, in this study we apply the notion of calibration by conformal prediction to calibrate top-p nucleus sampling as a function of the next-word distribution entropy and thus make the top-p decoding policy consistent. The same analysis and
calibration can also be applied to other commonly used decoding methods, such as variants of top-p (Meister et al., 2022) and truncation sampling
(Hewitt et al., 2022).
## Limitations
We calibrated OPT models based on Wikipedia data. Future work should apply calibration procedure to a wider range of datasets, to check whether our results generalize to different domains. Additionally, we limited our evaluation to entropy as a measure of uncertainty and did not explore other measures. Finally, we aimed at validating the calibration status of commonly used LMs. Future work should thoroughly evaluate the impact of the calibration status on different facets of generation quality, as text generation is one of the main usecases of large LMs.
## Ethics Statement
We do not foresee ethical issues with this work.
## Acknowledgements
This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT). Shauli Ravfogel is grateful to be supported by the Bloomberg Data Science Ph.D. Fellowship.
## References
David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. 1985. A learning algorithm for Boltzmann machines. *Cognitive Science*, 9(1):147–169.

Anastasios N Angelopoulos and Stephen Bates. 2021. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. *arXiv preprint arXiv:2107.07511*.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. *Advances in Neural Information Processing Systems*, 33:1877–1901.

Eldan Cohen and Christopher Beck. 2019. Empirical analysis of beam search performance degradation in neural sequence models. In *International Conference on Machine Learning (ICML)*.

Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. *arXiv preprint arXiv:1805.04833*.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In *International Conference on Machine Learning (ICML)*.

John Hewitt, Christopher D Manning, and Percy Liang. 2022. Truncation sampling as language model smoothing. In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In *International Conference on Learning Representations (ICLR)*.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. *arXiv preprint arXiv:2001.08361*.

Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In *Proceedings of the Annual Meeting of the Association for Computational Linguistics*.

Christopher Manning and Hinrich Schutze. 1999. *Foundations of Statistical Natural Language Processing*. MIT Press.

Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2022. Typical decoding for natural language generation. *arXiv preprint arXiv:2202.00666*.

Roberto I Oliveira, Paulo Orenstein, Thiago Ramos, and João Vitor Romano. 2022. Split conformal prediction for dependent data. *arXiv preprint arXiv:2203.15885*.

Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. MAUVE: Measuring the gap between neural text and human text using divergence frontiers. *Advances in Neural Information Processing Systems*, 34:4816–4828.

Yaniv Romano, Matteo Sesia, and Emmanuel Candes. 2020. Classification with valid and adaptive coverage. *Advances in Neural Information Processing Systems*.

Glenn Shafer and Vladimir Vovk. 2008. A tutorial on conformal prediction. *Journal of Machine Learning Research*, 9(3).

Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. 2005. *Algorithmic Learning in a Random World*. Springer Science & Business Media.

Volodya Vovk, Alexander Gammerman, and Craig Saunders. 1999. Machine-learning applications of algorithmic randomness. In *International Conference on Machine Learning*.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. *arXiv preprint arXiv:2206.07682*.

Jason Wei, Yi Tay, and Quoc V Le. 2022b. Inverse scaling can become U-shaped. *arXiv preprint arXiv:2211.02011*.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. OPT: Open pre-trained transformer language models. *arXiv preprint arXiv:2205.01068*.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. *arXiv preprint arXiv:1904.09675*.
## ACL 2023 Responsible NLP Checklist

## A. For Every Submission:
✓ A1. Did you describe the limitations of your work?
"Limitations"
✗ A2. Did you discuss any potential risks of your work?
We do not foresee risks from this work.
✓ A3. Do the abstract and introduction summarize the paper's main claims?
1
✗ A4. Have you used AI writing assistants when working on this paper?
Left blank.
## B ✗ **Did You Use Or Create Scientific Artifacts?**
Left blank.
B1. Did you cite the creators of artifacts you used?
No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
## C ✓ **Did You Run Computational Experiments?**

Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
No response.
C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
Not applicable. 3

C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
Not applicable. Left blank.
C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
Not applicable. Left blank.
## D ✗ **Did you use human annotators (e.g., crowdworkers) or research with human participants?**
Left blank.
D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
No response.
D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
No response.
D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
No response.
D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
No response.
D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
No response. |
chan-etal-2023-discoprompt | {D}isco{P}rompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition | https://aclanthology.org/2023.findings-acl.4 | "Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task to recognize(...TRUNCATED) | "# Discoprompt: Path Prediction Prompt Tuning For Implicit Discourse Relation Recognition\n\nChunkit(...TRUNCATED) |
cao-jiang-2023-modularized | Modularized Zero-shot {VQA} with Pre-trained Models | https://aclanthology.org/2023.findings-acl.5 | "Large-scale pre-trained models (PTMs) show great zero-shot capabilities. In this paper, we study ho(...TRUNCATED) | "# Modularized Zero-Shot Vqa With Pre-Trained Models\n\nRui Cao and **Jing Jiang**\nSchool of Comput(...TRUNCATED) |
tan-etal-2023-timelineqa | {T}imeline{QA}: A Benchmark for Question Answering over Timelines | https://aclanthology.org/2023.findings-acl.6 | "Lifelogs are descriptions of experiences that a person had during their life. Lifelogs are created (...TRUNCATED) | "# Timelineqa: A Benchmark For Question Answering Over Timelines\n\nWang-Chiew Tan, Jane Dwivedi-Yu,(...TRUNCATED) |
lam-etal-2023-abstractive | Abstractive Text Summarization Using the {BRIO} Training Paradigm | https://aclanthology.org/2023.findings-acl.7 | "Summary sentences produced by abstractive summarization models may be coherent and comprehensive, b(...TRUNCATED) | "\n## Abstractive Text Summarization Using The Brio Training Paradigm\n\nKhang Nhut Lam Can Tho Univ(...TRUNCATED) |
wu-etal-2023-modeling | Modeling the {Q}-Diversity in a Min-max Play Game for Robust Optimization | https://aclanthology.org/2023.findings-acl.8 | "Models trained with empirical risk minimization (ERM) are revealed to easily rely on spurious corre(...TRUNCATED) | "\n## Modeling The Q**-Diversity In A Min-Max Play Game** For Robust Optimization\n\nTing Wu1, Rui Z(...TRUNCATED) |
chen-etal-2023-pre | Pre-training Language Model as a Multi-perspective Course Learner | https://aclanthology.org/2023.findings-acl.9 | "ELECTRA, the generator-discriminator pre-training framework, has achieved impressive semantic const(...TRUNCATED) | "Pre-training Language Model as a Multi-perspective Course Learner Beiduo Chen§‡∗\n, Shaohan Hu(...TRUNCATED) |
tsymboi-etal-2023-layerwise | Layerwise universal adversarial attack on {NLP} models | https://aclanthology.org/2023.findings-acl.10 | "In this work, we examine the vulnerability of language models to universal adversarial triggers (UA(...TRUNCATED) | "\n## Layerwise Universal Adversarial Attack On Nlp Models\n\n# Olga Tsymboi1, 2, Danil Malaev1, **A(...TRUNCATED) |