{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:18.709486Z"
},
"title": "Are Some Words Worth More than Others?",
"authors": [
{
"first": "Shiran",
"middle": [],
"last": "Dudy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Understanding Oregon Health & Science University Portland",
"location": {
"settlement": "Oregon",
"country": "USA"
}
},
"email": "dudy@ohsu.edu"
},
{
"first": "Steven",
"middle": [],
"last": "Bedrick",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Understanding Oregon Health & Science University Portland",
"location": {
"settlement": "Oregon",
"country": "USA"
}
},
"email": "bedricks@ohsu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current evaluation metrics for language modeling and generation rely heavily on the accuracy of predicted (or generated) words as compared to a reference ground truth. While important, token-level accuracy only captures one aspect of a language model's behavior, and ignores linguistic properties of words that may allow some mis-predicted tokens to be useful in practice. Furthermore, statistics directly tied to prediction accuracy (including perplexity) may be confounded by the Zipfian nature of written language, as the majority of the prediction attempts will occur with frequently-occurring types. A model's performance may vary greatly between high-and low-frequency words, which in practice could lead to failure modes such as repetitive and dull generated text being produced by a downstream consumer of a language model. To address this, we propose two new intrinsic evaluation measures within the framework of a simple word prediction task that are designed to give a more holistic picture of a language model's performance. We evaluate several commonly-used large English language models using our proposed metrics, and demonstrate that our approach reveals functional differences in performance between the models that are obscured by more traditional metrics.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Current evaluation metrics for language modeling and generation rely heavily on the accuracy of predicted (or generated) words as compared to a reference ground truth. While important, token-level accuracy only captures one aspect of a language model's behavior, and ignores linguistic properties of words that may allow some mis-predicted tokens to be useful in practice. Furthermore, statistics directly tied to prediction accuracy (including perplexity) may be confounded by the Zipfian nature of written language, as the majority of the prediction attempts will occur with frequently-occurring types. A model's performance may vary greatly between high-and low-frequency words, which in practice could lead to failure modes such as repetitive and dull generated text being produced by a downstream consumer of a language model. To address this, we propose two new intrinsic evaluation measures within the framework of a simple word prediction task that are designed to give a more holistic picture of a language model's performance. We evaluate several commonly-used large English language models using our proposed metrics, and demonstrate that our approach reveals functional differences in performance between the models that are obscured by more traditional metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language models are foundational components in many NLP systems, and as such it is crucial to be able to empirically evaluate their behavior. Traditionally, language models are evaluated using performance metrics that relate to the model's ability to accurately predict words given some context (e.g., perplexity). Following the paradigm described by Galliers and Sp\u00e4rck Jones (1993) , this can be thought of as an intrinsic evaluation criterion (and perplexity an intrinsic metric), as it relates to the objective of the language model itself.",
"cite_spans": [
{
"start": 351,
"end": 383,
"text": "Galliers and Sp\u00e4rck Jones (1993)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, it has become common to also evaluate language models extrinsically, in terms of the model's function. This is done by measuring a model's performance when used as a component in a downstream task. 1 For example, Devlin et al. (2019) evaluated BERT by using it as the language model component in benchmark tasks such as question answering and \"commonsense inference.\" 2 This shift towards extrinsic and task-oriented evaluation is welcome, and has the potential to make language model evaluation more ecologically valid. 3 As useful as task-oriented evaluation metrics are, however, we believe that this approach brings with it certain practical limitations, and that there remains a strong need for robust and meaningful intrinsic evaluation metrics that can be used to characterize and compare the performance of language models.",
"cite_spans": [
{
"start": 230,
"end": 250,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 538,
"end": 539,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we outline and propose a variation on the standard next-word-prediction language modeling task that is designed for use in evaluating and comparing language models and is robust to implementation differences (tokenization method, etc.) that complicate the comparison of modern models in terms of token-level predictions. Our proposed metrics are richer and more meaningful measures than traditional intrinsic metrics such as perplexity, which is insensitive to which tokens are matched, and as such may be 1 In part, this trend has been driven by the increasing use of downstream tasks as ancillary training objective functions; this somewhat confuses the traditional notion of intrinsic and extrinsic evaluation as a binary construct.",
"cite_spans": [
{
"start": 520,
"end": 521,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 SQuAD versions 1.1 (Rajpurkar et al., 2016) and 2.0 (Rajpurkar et al., 2018) , and SWAG (Zellers et al., 2018) , respectively, in the case of the original BERT paper.",
"cite_spans": [
{
"start": 21,
"end": 45,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 54,
"end": 78,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 90,
"end": 112,
"text": "(Zellers et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 \"Ecological validity\" is a dimension of experimental validity that is concerned with the question of whether an observed effect reflects \"what happens in everyday life\" (Brewer and Crano, 2014) , i.e. beyond the artificial setting of the experiment itself. In an NLP context, a researcher working on question answering who was concerned with ecological validity would ensure that the questions on which they trained and evaluated their system were similar (in form and content) to those on which the system was designed to be used. confounded by distributional properties of their evaluation corpora. Our approach accounts not only for the accuracy of a model's word predictions, but also the diversity of types that it predicts, across different lexical frequency bins. We further propose a formulation for the next-word-prediction task that explicitly allows for language-and task-level details to be captured in the resulting metrics, thereby blurring the line between intrinsic and extrinsic language model evaluation. Our methods provide greater ecological validity than traditional intrinsic evaluation methods, while still remaining simple to interpret and easy to calculate.",
"cite_spans": [
{
"start": 171,
"end": 195,
"text": "(Brewer and Crano, 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For our present purposes, we will consider a language model to be a model that, given a sequence W of n tokens w 1:n from a fixed vocabulary of types V , estimates the joint probability of P (W ). The goal of a language models is of learning to approximate the distribution of tokens and types in some corpus. Importantly, different models may use different units of prediction, at the level of individual character, at the word level, or (as with many modern neural models) at the level of a sub-word/sub-sentence unit (via e.g. byte-pair encoding (Sennrich et al., 2016) , wordpieces (Wu et al., 2016), etc.) . Given such a model, we can typically also estimate the conditional probability distribution P (w t |w 1 ... w t\u22121 ), over possible words occurring after a given history h consisting of t\u22121 tokens. We refer to this as the next-word-prediction problem 4 of predicting\u0175 t = argmax w P (w|h). Using the terminology of conditional text generation, this is akin to generating a single token via greedy decoding given a context. This is of more than theoretical interest from a language modeling perspective. Language models trained using the standard cross-entropy loss function are in effect being optimized to perform this very task, and furthermore, many NLP applications rely in practice on effective and robust word prediction.",
"cite_spans": [
{
"start": 549,
"end": 572,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 586,
"end": 610,
"text": "(Wu et al., 2016), etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formalities: Language Models and Word Prediction",
"sec_num": "1.1"
},
{
"text": "A standard and widely-used metric for evaluating language model performance is with perplexity (P P X), which is closely related to this prediction task. When computed for a given token prediction event by a language model, P P X captures how \"predictable\" that event was for the model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalities: Language Models and Word Prediction",
"sec_num": "1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P P X(p,q)=\u2212 X p(x)logq(x)",
"eq_num": "(1)"
}
],
"section": "Formalities: Language Models and Word Prediction",
"sec_num": "1.1"
},
{
"text": "4 Also known as the \"Shannon Game\" (Shannon, 1951) .",
"cite_spans": [
{
"start": 35,
"end": 50,
"text": "(Shannon, 1951)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formalities: Language Models and Word Prediction",
"sec_num": "1.1"
},
{
"text": "Where X corresponds to V (the model's vocabulary of possible tokens it must choose between), p(x) represents the \"true\" or \"target\" distribution and q(x) the model's estimated distribution. The closer the predicted distribution matches the target distribution, the lower the perplexity. When averaged over many prediction events, and computed on a held-out test dataset, perplexity attempts to capture the degree to which the model has optimally learned to represent its target distribution. A more accurate (i.e., \"better\") model should result in lower average perplexity (as the model will more often predict a high probability for the correct target).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalities: Language Models and Word Prediction",
"sec_num": "1.1"
},
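{
"text": "To make Equation 1 concrete, the following is a minimal sketch (not the authors' code) of computing average per-token perplexity from the log-probabilities a model assigned to the observed tokens; the token_log_probs input is a hypothetical list of such log-probabilities, and, following the conventional definition, the cross-entropy of Equation 1 is exponentiated here.

import math

# token_log_probs: the (natural) log-probability the model assigned to each
# observed token, i.e. log q(w_t | history).
def average_perplexity(token_log_probs):
    # Cross-entropy is the mean negative log-probability of the targets;
    # perplexity is its exponential.
    cross_entropy = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(cross_entropy)

# Example: a model that assigns probability 0.25 to every observed token
# has an average perplexity of 4.
print(average_perplexity([math.log(0.25)] * 10))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formalities: Language Models and Word Prediction",
"sec_num": "1.1"
},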
{
"text": "Perplexity is a classic example of an intrinsic evaluation metric, in that it is measuring the model's ability to carry out its immediate objective. As mentioned previously, modern language models are often evaluated according to their performance when used as components in a downstream task of some kind. 5 We find this increasing prevalence of extrinsic evaluation to be a very positive development, and do not in any way wish to argue against use of downstream tasks for evaluation. However, we see several limitations to an extrinsic-only evaluation paradigm, and argue for more robust intrinsic measures. 6 Extrinsic evaluation is necessarily dependent on the selection of specific benchmark tasks to include, and this process is fraught with difficulty, for several reasons. First, there are many possible benchmark tasks from which one could choose, each attempting to measure something different. Different authors will naturally choose different combinations of tasks when evaluating their language models, as they may be focused on different aspects of their models' behavior. While scientifically appropriate, this does make for a heterogeneous evaluation landscape, and complicates comparisons between published results. Second, new tasks are constantly being created, and existing tasks are regularly updated. This results in a complex and unstable evaluation landscape in which evaluation tasks change from year to year, and allows for much confusion around versions of datasets and benchmarks. Third, downstream NLP tasks and datasets often have their own issues around validity.",
"cite_spans": [
{
"start": 307,
"end": 308,
"text": "5",
"ref_id": null
},
{
"start": 611,
"end": 612,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Considerations",
"sec_num": "1.2"
},
{
"text": "For example, the commonly-used SNLI natural language inference corpus (Bowman et al., 2015) was later found to have substantial issues resulting from artifacts in how its annotations were collected (Gururangan et al., 2018) . How should one now assess a language model evaluated using this downstream task, knowing that the metrics may be of very limited validity? Finally, we note that widely-used and well-studied downstream evaluation tasks are often not available in \"low-resource\" languages, and so may not be an option in many scenarios. For these reasons, we believe that intrinsic measures should still play an important role in language model evaluation.",
"cite_spans": [
{
"start": 198,
"end": 223,
"text": "(Gururangan et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Considerations",
"sec_num": "1.2"
},
{
"text": "The question then becomes that of what to measure. Perplexity has the advantage of being well-understood and easy to calculate, and is closely linked to the standard cross-entropy training loss frequently used in language modeling. However, it has long been observed that perplexity itself often fails to correlate with downstream task performance (Iyer et al., 1997; Ito et al., 1999) , suggesting that it may have limited external validity as a metric.",
"cite_spans": [
{
"start": 348,
"end": 367,
"text": "(Iyer et al., 1997;",
"ref_id": "BIBREF12"
},
{
"start": 368,
"end": 385,
"text": "Ito et al., 1999)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Considerations",
"sec_num": "1.2"
},
{
"text": "There is an additional, more subtle limitation to the use of perplexity in cross-model comparison. As previously mentioned, many modern language models use sub-word units of prediction. One of the consequences of this heterogeneity is that evaluation metrics that relate to individual base-level prediction events (as is the case with perplexity) are not comparable across models, even if they are trained and evaluated on the same corpus: different tokenizations and vocabularies will result in different numbers of prediction events, as well as a differently-sized space of possible choices at each event. From the perspective of the perplexity metric, two models with different approaches to tokenization are performing fundamentally different and numerically incomparable tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Considerations",
"sec_num": "1.2"
},
{
"text": "Beyond this statistical problem, there is a problem with the underlying semantics of using perplexity as a measure when working with sub-word units. Any actual application of a language model that involves explicit word prediction 7 will ultimately demand not fragments of words, but rather entire words. In other words, even models whose native unit of prediction is at the sub-word level must make predictions that can eventually be able to be decoded into whole words at some point.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Considerations",
"sec_num": "1.2"
},
{
"text": "Given that, raw perplexity becomes a somewhat confusing evaluation metric, as the underlying phenomenon that it is measuring is quite distinct from the model's actual objective (i.e., predicting a whole word). Imagine, for instance, a model that predicts at the sub-word level, and now must predict a word given the history \"The tyrannosaurus was chased by the.\" The correct continuing word is \"velociraptor,\" and under the sub-word tokenization used by this model, this will necessitate several separate prediction events (as \"velociraptor\" is both a long and an infrequently-occurring word). From the perspective of the perplexity metric, however, there will be no difference between the first unit or the third. 8 Whatever the perplexity metric is telling us about the model's behavior during this process will likely tell us little about the model's ability to actually predict \"velociraptor\" given this particular word history.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Considerations",
"sec_num": "1.2"
},
{
"text": "We propose that intrinsic evaluation of language models be done in terms of the whole-word prediction task, regardless of the specific tokenization practices of any particular model. This would have the advantage of making cross-model comparison easier, and of the resulting metric bearing a closer resemblance to what we intuitively expect such a metric to capture (i.e., the model's performance at its primary objective). While computing perplexity at the level of whole words (see section 2) is a step in the right direction, we also propose several additional intrinsic metrics relating to the word prediction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recentering on Words",
"sec_num": "1.3"
},
{
"text": "Word Prediction Accuracy We propose directly measuring and reporting the model's raw accuracy at word-level predictions (i.e., the proportion of words that were predicted correctly). This has the advantage over perplexity of grounding the number more closely to the concrete performance objective that we are concerned with. Furthermore, it is easily extended to account for various attributes of model behavior that may be of interest in terms of downstream tasks, while still remaining in the realm of intrinsic evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recentering on Words",
"sec_num": "1.3"
},
{
"text": "In the experiments we describe in section 2, we experiment with variations on this metric that capture different notions of \"accuracy.\" For example, we explore \"top n\" accuracy (i.e., if the target word is in within top n most likely predictions, that prediction counts as a \"hit\"). This could be of use in a text entry scenario, in which the model is responsible for generating candidate words for further selection or refinement by an end-user (as in a mobile phone keyboard application). Many other possible downstream tasks for language models involve techniques that would also benefit from having the target word given better placement in the ranked prediction space, and thus would benefit from a metric explicitly measuring this property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recentering on Words",
"sec_num": "1.3"
},
{
"text": "\"Soft-Match\" Prediction Accuracy We propose extending simple prediction accuracy to allow for \"near miss\" predictions, where the predicted word is \"similar\" to the target (for a specified definition of \"similar\"). In many applications of language modeling, there may be multiple possible valid predictions. This problem has long been understood in the context of machine translation evaluation; in their description of the motivation behind the METEOR metric, Lavie and Denkowski (2009) addressed the \"problem of reference translation variability by utilizing flexible word matching, allowing for morphological variants and synonyms to be taken into account as legitimate correspondences.\" In a word prediction task, we could allow an explicit synonym to count as a correct prediction; depending on the application or domain in question, one could use external language resources to model much more complex and task-specific notions of similarity (e.g., in a biomedical NLP context, one might give the model credit at evaluation time for predicting a medication that is from the same functional class as the target).",
"cite_spans": [
{
"start": 460,
"end": 486,
"text": "Lavie and Denkowski (2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recentering on Words",
"sec_num": "1.3"
},
{
"text": "In the experiments described in section 2.4.2, we use a method based on word neighborhoods in an embedding space. Depending on the nature of the task under consideration, other features could be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recentering on Words",
"sec_num": "1.3"
},
{
"text": "Or, consider a typing task in a morphologically rich language, in which a user might be willing to accept predictions that involve the correct lexeme but with an incorrect inflection. Allowing for this sort of flexibility in the evaluation of a word prediction model has the potential to greatly increase the ecological validity of the experiment, in that, that the experimenter is able to easily encode their own task-specific notions of relevance while still staying in a fairly constrained and easy-to-analyze evaluation setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recentering on Words",
"sec_num": "1.3"
},
{
"text": "One important limitation of raw classification accuracy as a metric is its susceptibility to being biased by imbalanced class distributions. For example, if some classes occur much more frequently than others, a model may achieve a high accuracy score by learning to focus on these frequent classes to the exclusion of infrequent ones. In written language, the distribution of classes (i.e., of word types) are notoriously skewed (Zipf, 1935) , and exhibit a \"long tail\" of words that occur relatively infrequently, with a small set of \"head\" words that make up a large proportion of individual tokens observed in the training and test data.",
"cite_spans": [
{
"start": 430,
"end": 442,
"text": "(Zipf, 1935)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Frequency & Diversity",
"sec_num": null
},
{
"text": "We observe that language models often exhibit very different performance characteristics when predicting more common types than less common types; in fact, our experiments in this paper demonstrate that, for some commonly-used language models, the actual number of infrequent types that are ever correctly predicted is surprisingly small (see section 3.1). This over-emphasis on frequent types, when carried forward into downstream generation tasks, may lead to the failure mode described by Holtzman et al. (2020) in which generated text is \"dull and repetitive.\" This phenomenon is not limited to words alone; morphologically-rich languages (MRLs) exhibit a similar Zipfian distributional pattern in terms of the occurrence of different morphological phenomena, which in turn affects the performance of systems designed to process such features of language (Czarnowska et al., 2019; Tsarfaty et al., 2020) . We believe that this behavior can be explained through the lens of the bias-variance tradeoff common to all statistical learning problems. As observed by Lazaridou et al. (2015) , neural models have a tendency towards the \"bias\" end of that tradeoff, which in the context of language modeling results in a strong preference for head words and against tail words. This is a serious enough problem in machine translation and text generation systems that there is a growing body of literature looking at ways to increase the lexical diversity in model output. Some authors (Li et al., 2016; Welleck et al., 2020) have examined training strategies and loss functions that optimize for diverse output, while others (Vijayakumar et al., 2016; Ippolito et al., 2019) focus on alternatives to greedy decoding and identify several ways to generate more diverse sequences of words. Questions of evaluation arise, as the construct of \"diversity\" itself is surprisingly difficult to characterize, as pointed out by Tevet and Berant (2020) .",
"cite_spans": [
{
"start": 492,
"end": 514,
"text": "Holtzman et al. (2020)",
"ref_id": "BIBREF9"
},
{
"start": 859,
"end": 884,
"text": "(Czarnowska et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 885,
"end": 907,
"text": "Tsarfaty et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 1064,
"end": 1087,
"text": "Lazaridou et al. (2015)",
"ref_id": "BIBREF14"
},
{
"start": 1480,
"end": 1497,
"text": "(Li et al., 2016;",
"ref_id": "BIBREF15"
},
{
"start": 1498,
"end": 1519,
"text": "Welleck et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 1620,
"end": 1646,
"text": "(Vijayakumar et al., 2016;",
"ref_id": "BIBREF29"
},
{
"start": 1647,
"end": 1669,
"text": "Ippolito et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1913,
"end": 1936,
"text": "Tevet and Berant (2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Frequency & Diversity",
"sec_num": null
},
{
"text": "In the context of our word prediction task, we propose two evaluation measures that account for the Zipfian skew in type distributions, and illuminate differences in model performance across the type frequency spectrum. First, we propose stratifying our evaluation of prediction accuracy by frequency, such that we separately measure the model's ability to predict occurrences of high-, mid-, and low-frequency types (stratified token coverage). Second, we propose measuring the overall proportion of possible types that the model was able to predict at least once during eval-uation (type coverage, also stratified by frequency).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Frequency & Diversity",
"sec_num": null
},
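{
"text": "A minimal sketch of the two proposed measures, assuming a hypothetical list of (target, predicted) word pairs from an evaluation run and a dictionary of training-set frequencies; note that type coverage here is computed relative to the types observed during evaluation, whereas Appendix A describes binning against the training vocabulary intersected with the test set.

from collections import defaultdict

# Bins follow the paper's stratification by training-set frequency:
# high for counts in [10^3, inf), mid for [10^2, 10^3), low for [10^1, 10^2).
def frequency_bin(freq):
    if freq >= 1000:
        return 'high'
    if freq >= 100:
        return 'mid'
    if freq >= 10:
        return 'low'
    return 'rare'  # below the lowest evaluated bin

# pairs: iterable of (target_word, predicted_word); train_freqs: word -> count.
def stratified_coverage(pairs, train_freqs):
    hits, attempts = defaultdict(int), defaultdict(int)
    types_seen, types_hit = defaultdict(set), defaultdict(set)
    for target, predicted in pairs:
        b = frequency_bin(train_freqs.get(target, 0))
        attempts[b] += 1
        types_seen[b].add(target)
        if predicted == target:
            hits[b] += 1
            types_hit[b].add(target)
    # Stratified token coverage: per-bin accuracy over prediction events.
    token_coverage = {b: hits[b] / attempts[b] for b in attempts}
    # Type coverage: per-bin share of types predicted correctly at least once.
    type_coverage = {b: len(types_hit[b]) / len(types_seen[b]) for b in attempts}
    return token_coverage, type_coverage",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Frequency & Diversity",
"sec_num": null
},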
{
"text": "In this section we describe a series of experiments in which we use our proposed evaluation metrics to explore the behavior of several widely-used and large-scale language models (obtained using the HuggingFace (Wolf et al., 2019) Transformers library). Specifically, we examine GPT-2 (Alec et al., 2019) (gpt-2), GPT (Alec et al., 2018) (openai-gpt), RoBERTa (Liu et al., 2019) (roberta-base), and BERT (Devlin et al., 2019) (bert-base-uncased).",
"cite_spans": [
{
"start": 211,
"end": 230,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 318,
"end": 337,
"text": "(Alec et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 360,
"end": 378,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 404,
"end": 425,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Since the pre-trained models were all trained in widely varying ways on different corpora, we ran each model through a single pass of fine-tuning on a common corpus to attempt to bring them more closely into alignment. For this fine-tuning (and for the ensuing experiments), we used WikiText 103 (Merity et al., 2016) , which consists of a large (n=28,475) training set of English-language Wikipedia articles and a small (n= 60) test set of 60 articles, with one sentence per line. The fine-tuning task was on a word prediction task in a unidirectional fashion, in which the context is based only past history (i.e., not on future tokens). 9 We note that for BERT and RoBERTa, this usage does differ somewhat from the prediction paradigm under which they were trained, which is implicitly bidirectional.",
"cite_spans": [
{
"start": 296,
"end": 317,
"text": "(Merity et al., 2016)",
"ref_id": null
},
{
"start": 640,
"end": 641,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Datasets",
"sec_num": "2.1"
},
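{
"text": "As an illustration of the unidirectional setup (a sketch, not the actual fine-tuning or evaluation code): for masked models such as BERT, a next-word query can be formed by appending the mask token to the left context and reading the distribution at that position. The snippet below uses the HuggingFace Transformers API; the model name and context string are illustrative.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModelForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

# Left context only: the appended mask token stands in for the next word,
# so no future tokens are visible to the model.
context = 'the dinosaur ate the'
inputs = tokenizer(context + ' ' + tokenizer.mask_token, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

# Read the distribution at the mask position and take the top guesses.
mask_index = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero().item()
top_ids = logits[0, mask_index].topk(10).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training & Datasets",
"sec_num": "2.1"
},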
{
"text": "As previously described, modern language models typically use sub-word/sub-sentences units as their native unit of prediction. In order to perform a meaningful evaluation of cross-model word prediction accuracy, it is necessary to obtain word-level predictions, which for the mentioned models may involve more than one model-level prediction event. The models we worked with in this set of experiments used two different tokenization strategies (wordpieces for GPT and BERT, and BPE for GPT-2 and RoBerta), and as such we developed algorithms for decoding whole words by sequentially decoding individual sub-word units. While the algorithms differ slightly in their implementation between the model families, the overall method is similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Whole-word decoding",
"sec_num": "2.2"
},
{
"text": "Our single-word decoding algorithm extracts the first word candidate by the model through con-catenating tokens until end-of-word is indicated, 10 and then compared with a target word (see App. B, Algorithm 1). To extract multiple candidate words, given a target word we run a Depth-First Search to find whether a valid path of tokens exist, having each model prediction spanning its top ten guesses (App. B, Algorithm 2). This is not a typical beam-search based on likelihoods, but rather is based on the existence of valid units (in the first K options) for a given target word, simulating user choices given a context. 11",
"cite_spans": [
{
"start": 144,
"end": 146,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Whole-word decoding",
"sec_num": "2.2"
},
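{
"text": "The following is a simplified sketch of the single-word decoding idea behind Algorithm 1 (a paraphrase under stated assumptions, not the released implementation), for a left-to-right model with a byte-level BPE tokenizer in which a leading U+0120 marker denotes a word boundary, as in GPT-2.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2')
model = AutoModelForCausalLM.from_pretrained('gpt2')
model.eval()

WORD_START = chr(0x120)  # in GPT-2's byte-level BPE, a token beginning with
                         # U+0120 starts a new word

# Greedily concatenate sub-word units until the next unit would start a new word.
def predict_next_word(context, max_units=8):
    ids = tokenizer(context, return_tensors='pt')['input_ids']
    pieces = []
    for _ in range(max_units):
        with torch.no_grad():
            logits = model(input_ids=ids).logits
        next_id = int(logits[0, -1].argmax())
        piece = tokenizer.convert_ids_to_tokens(next_id)
        # Stop once the current word is complete (a second word-start marker
        # or the end-of-text token is predicted).
        if pieces and (piece.startswith(WORD_START) or next_id == tokenizer.eos_token_id):
            break
        pieces.append(piece)
        ids = torch.cat([ids, torch.tensor([[next_id]])], dim=1)
    return tokenizer.convert_tokens_to_string(pieces).strip()

print(predict_next_word('The tyrannosaurus was chased by the'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Whole-word decoding",
"sec_num": "2.2"
},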
{
"text": "In addition to decoding whole words, we would like to be able to obtain a probability estimate of the resulting prediction, for use in computing a word-level perplexity measure. We approximate this by taking the product of the prediction-level probabilities (i.e., the model's estimate of the probability of each constituent unit in a given decoded word), which we can then use for a perplexity-like score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Whole-word decoding",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "ppx=\u2212 words p(w)log( units q(u))",
"eq_num": "(2)"
}
],
"section": "Whole-word decoding",
"sec_num": "2.2"
},
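{
"text": "A sketch of the approximation in Equation 2: the probability of a decoded word is taken to be the product of its constituent unit probabilities (a sum of log-probabilities), which then feeds a word-level perplexity. The helper functions and inputs below are illustrative, and a uniform average over words is used in place of an explicit p(w) weighting.

import math

# Log-probability of a decoded word, approximated as the product of the
# probabilities of its constituent sub-word units.
def word_log_prob(unit_log_probs):
    return sum(unit_log_probs)

# per_word_units: one list of unit log-probabilities per decoded word.
def word_level_perplexity(per_word_units):
    word_lps = [word_log_prob(units) for units in per_word_units]
    return math.exp(-sum(word_lps) / len(word_lps))

# e.g. a word decoded as three units to which the model assigned
# probabilities 0.2, 0.5 and 0.9:
print(word_level_perplexity([[math.log(0.2), math.log(0.5), math.log(0.9)]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Whole-word decoding",
"sec_num": "2.2"
},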
{
"text": "We performed a word-level prediction experiment on the test dataset described in Section 2.2, using each of the models in Section 2. For each test example, we performed incremental unidirectional word prediction using Algorithm 1 to generate whole-word predictions. In other words, for each test example W comprised of w 1 ...w n words, we queried the model n \u2212 1 times, to predict\u0175 i = argmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.3"
},
{
"text": "w P (w i |w 1:i\u22121 ) for i \u2208 [2,n].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.3"
},
{
"text": "Additionally, we used Algorithm 2 to decode the top k ranked word predictions (for k = 10),\u0175 k t . In other words, for the test input \"the dinosaur ate the ...\" we would sequentially predict p(w 2 |\"the\"), p(w 3 |\"the dinosaur\"), and so on. At each prediction event, we compared the predicted w t to the ground-truth w t according to the various metrics described in the next section. We counted as \"hits\" word-level prediction events where the comparison matched (for the different definitions of \"matched\"), and \"misses\" otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "2.3"
},
{
"text": "We measure token-level prediction accuracy 12 using an exact-match criterion, top 1 . In other words, if w t =\u0175 t , a \"hit\" is counted; otherwise, a miss. We also computed a higher-recall metric top k , in which a \"hit\" is counted if w t \u2208\u0175 k t -i.e., if the target word is in the top k predictions, it counts as a \"hit.\" For our experiments, we computed top 10 (i.e., k =10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": "2.4.1"
},
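{
"text": "A minimal sketch of the top_1 / top_k hit counting, assuming each prediction event yields a ranked list of candidate words (hypothetical inputs):

# events: iterable of (target_word, ranked_candidate_words) pairs, one per
# prediction event; candidates are ordered from most to least likely.
def top_k_accuracy(events, k=10):
    top1_hits = topk_hits = total = 0
    for target, candidates in events:
        total += 1
        # top_1: the single most likely prediction must match exactly.
        if candidates and candidates[0] == target:
            top1_hits += 1
        # top_k: a hit if the target appears anywhere in the first k candidates.
        if target in candidates[:k]:
            topk_hits += 1
    return top1_hits / total, topk_hits / total

events = [('velociraptor', ['dinosaur', 'velociraptor', 'raptor']),
          ('the', ['the', 'a', 'an'])]
print(top_k_accuracy(events, k=10))  # -> (0.5, 1.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Accuracy",
"sec_num": "2.4.1"
},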
{
"text": "As described in section 1.3, there are a number of criteria by which one might implement a soft-matching algorithm. From the perspective of evaluation, the key is to design a criterion in such a way as to capture the aspect of user behavior that one may wish to support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-Match Accuracy",
"sec_num": "2.4.2"
},
{
"text": "We performed our soft-matching experiments with a text entry scenario in mind, in which a user is able to choose among the language model's top n predictions. Under this scenario, if the model fails to predict the target word but instead predicts a related word (a synonym, perhaps), the user may still be able to convey their message. To simulate this, we may define the soft-match operation as follows: For our experiments here, we used a method based on similarity in word embedding space, under the theory that words with similar embeddings may be (relatively) appropriate substitutions in a word prediction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-Match Accuracy",
"sec_num": "2.4.2"
},
{
"text": "We used the word2vec algorithm (Mikolov et al., 2013) to train 50-dimensional word embeddings on the \"train\" subset of the WikiText-103 corpus. We then defined our softmatch similarity function s knn (a,b) = a \u2208 knn(b), where knn(b,k) retrieves the k nearest neighbors of target word b in the embedding space. Using our softmatch function, we then re-scored the prediction accuracy such that a positive softmatch counted as a \"hit.\" We used the Annoy library (Bernhardsson, 2018) to perform efficient nearest-neighbor retrieval. We conducted experiments in which we varied the k parameter; in other words, by allowing a match deeper into the k-nearest neighbors of the target. Our motivation for this was that, ceteris paribus, a model that mis-predicts a target but at least guesses something that lies in the right semantic neighborhood is more useful than one that does not.",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF19"
},
{
"start": 459,
"end": 479,
"text": "(Bernhardsson, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-Match Accuracy",
"sec_num": "2.4.2"
},
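{
"text": "A sketch of the embedding-neighborhood soft match described above, using gensim's word2vec implementation; the toy sentences and training parameters are illustrative, and for a full evaluation run an approximate nearest-neighbor index such as Annoy (as used in the paper) would replace the brute-force most_similar call.

from gensim.models import Word2Vec

# Train small embeddings on tokenized sentences; the tiny list here stands in
# for the tokenized WikiText-103 train split.
train_sentences = [['the', 'tyrannosaurus', 'was', 'chased', 'by', 'the', 'velociraptor'],
                   ['the', 'velociraptor', 'chased', 'the', 'tyrannosaurus']]
w2v = Word2Vec(sentences=train_sentences, vector_size=50, min_count=1, seed=0)

# A prediction counts as a soft hit if it matches the target exactly or falls
# within the k nearest embedding-space neighbors of the target.
def soft_match(predicted, target, k=10):
    if predicted == target:
        return True
    if target not in w2v.wv or predicted not in w2v.wv:
        return False
    neighbors = {word for word, _ in w2v.wv.most_similar(target, topn=k)}
    return predicted in neighbors

print(soft_match('tyrannosaurus', 'velociraptor', k=3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-Match Accuracy",
"sec_num": "2.4.2"
},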
{
"text": "In order to measure type diversity given all the hits in top 1 /top 10 , we counted how many unique types were correctly predicted for the first and top-ten guesses and present it in T 1 (T 10 ) respectively. To illustrate the utility of measuring the rate of unique types that were correctly predicted, consider a hypothetical dataset in which 20% of the tokens consist of the word the, and that the model at hand predicts only this word for every sample in the test set. In this scenario, top 1 accuracy will be 20%, as the is a correct prediction for 20% of the times, yet T 1 is based on only one type 13 as there was only a single type that was correctly predicted-suggesting sub-optimal learning of the input distribution, or a lack on the model's ability to reflect that distribution during test. Table 1 describes the results on the different models. GPT-2, and GPT, that were pre-trained for word prediction task exhibited the lowest ppx. GPT-2 had the highest hit rate, and type diversity. However, when comparing GPT, to RoBerta, while accuracy seems to present a similar performance, and the ppx is lower for GPT, RoBerta is found to be much more diverse than GPT, suggesting that the similar hit rates (28.18, 29.37) can be attributed to different reasons as shown by their different performance over T x metric. On the other hand, we can also learn that while Bert, and GPT share similar diversity rate (prediction diversity), GPT exhibits a higher prediction accuracy making for a different accuracy/diversity ratio than Bert, which may also suggest a different prediction behavior than of Bert's. To understand type diversity we must explore which types were predicted well and which types were harder. To this end, we stratified both T 1 , and top 1 as a function of frequency; high, mid, and low, for x \u2208 [10 3 ,inf), x \u2208 [10 2 ,10 3 ), x\u2208[10 1 ,10 2 ) where x is each target type's frequency. Figure 1 describes the type distribution reflecting high diversity for both GPT-2, and RoBerta, while having GPT-2 picking on the low-bin twice as many than RoBerta. Notice the stark difference between RoBerta, and GPT, RoBerta outperformed GPT across every bin, illustrating its diversity strength (given the similar hit rate shown earlier). While performing worse, both GPT, and Bert, seem to share similar rates of diversity, with GPT, performing almost twice as many on the lowest bin. Finally, even GPT-2 that attained the highest diversity, was covering only 50%, 14%, and 7% of the trained types we evaluated on. This shows there is room for improvement to reflect more optimally the input data's distribution. showing that while not so different in diversity, RoBerta is missing the hits mostly from the most frequent bin 10% gap, and a sub-optimal prediction in the mid-and low-bins. The similar hit rate of RoBerta, and GPT, clearly is distributed differently having RoBerta reaching parts of the long tail of the distribution more often than GPT. Bert, and GPT, also exhibit the biggest gap in the most frequent bin with 8% difference, while the mid and low bins are similar. Overall evaluating prediction diversity can inform us about the model's priorities. Through measuring type diversity, we learn that models that share similar hit rates, can be vary immensely in diversity, which later on may impact downstream tasks. Evaluating diversity could not only inform us to what degree the learned distribution is reflected, but could directly point at the missing types, and the weaknesses of the model. 
Since all these models are shown to be weaker in the lower bins, or biased by frequency, our community can benefit if we start addressing this problem, which indirectly would contribute to higher accuracies as well. In Section 4 we illustrate in a case study why learning diverse types, and low-frequency types in particular can be useful. Next, we present a way to further understand our models, even if the target word was not found directly in a prediction. Figure 3 illustrates GPT-2, and GPT's T 1 , and top 1 performance on left (bars) and right (line) axes. Both models gradually (@3-@100) capture more types as the beam of k in knn was increased (considering more target-neighbors), leading to increased hits. This evaluation shows that GPT-2 exact match (@1) are higher, but that its misses can enrich the pool of unique types with 14% (@100) additional unique types (light blue), whereas, GPT-2 covered only 11% more types (light pink), while both models increase in accuracy is similar. This reinforces that the models' prediction mechanism is slightly different, as similar gains in accuracy are translated to either more of learned high types or more diverse patterns shown for the models in Figures 1, 2 . This analysis teaches us that even if there were mis-matches some of them were actual near misses, and are related to what it was expected to predict, which as mentioned can be of practical use for different users, or for analyzing how wrong were the mis-matches as part of an error analysis process.",
"cite_spans": [],
"ref_spans": [
{
"start": 804,
"end": 811,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1912,
"end": 1920,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 3989,
"end": 3997,
"text": "Figure 3",
"ref_id": "FIGREF4"
},
{
"start": 4733,
"end": 4745,
"text": "Figures 1, 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Lexical Diversity",
"sec_num": "2.4.3"
},
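{
"text": "As a companion to the hit-rate sketch above, a minimal sketch of the T_1 / T_10 counts (hypothetical inputs; the paper reports these relative to the overall number of types):

# events: iterable of (target_word, ranked_candidate_words) pairs.
# Returns (T_1, T_k): the number of unique types ever predicted correctly
# as the first guess, and anywhere within the top k guesses, respectively.
def type_diversity(events, k=10):
    t1_types, tk_types = set(), set()
    for target, candidates in events:
        if candidates and candidates[0] == target:
            t1_types.add(target)
        if target in candidates[:k]:
            tk_types.add(target)
    return len(t1_types), len(tk_types)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Diversity",
"sec_num": "2.4.3"
},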
{
"text": "In this section we will look at the impact of model inference performance on the particular downstream task of paraphrasing. To this end we employed a SotA algorithm, Bertscore (Zhang et al., 2019) , to compute similarity scores of sentence pairs in part by comparing embeddings derived from a language model. Under Bertscore, higher similarity scores indicate greater semantic similarity of a pair of sentences, such that one is a closer paraphrase of the other. We would like to stress that the critique that may be risen at the end of this section is not about Bertscore tool as such, but are rather about a certain type of pattern that the models that are employed by this tool may have insufficiently learned.",
"cite_spans": [
{
"start": 177,
"end": 197,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": "Why Paraphrasing? We choose the downstream task of paraphrasing to measure semantic similarity of sentence pairs as it can be easily manipulated to consider a single word modification. Consider the following example sentence involving the word triple: (poodle, dog, cat) 14 (a) which dog has longer hair ?",
"cite_spans": [
{
"start": 271,
"end": 273,
"text": "14",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": "(b) which cat has longer hair ? (c) which poodle has longer hair ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": "The pair (a, b) ought to score lower (i.e., be considered by Bertscore to be more dissimilar) than the pair (a,c), as a is a valid paraphrase of c while b is not. This, of course, assumes that the language model being used as the underlying source of embeddings for the Bertscore algorithm has accurately captured the semantic meaning of the three words under consideration. If not, we may see an inversion of results such that (a,b) appears (incorrectly) more similar than (a,c), suggesting that the model in question should perhaps not be used for paraphrase-related tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": "To explore the impact of word frequency on model representations with our fine-tuned models, we have generated 50 rare, and 50 common triples elicited from wiki-103 trainset. Each of the triples contains a rare/common word, its hypernym, and a sibling hypernym extracted from WordNet (Miller, 1995) (using nltk (Loper and Bird, 2002) ",
"cite_spans": [
{
"start": 284,
"end": 298,
"text": "(Miller, 1995)",
"ref_id": "BIBREF20"
},
{
"start": 311,
"end": 333,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": ") (x r ,x h ,x a ), (x c ,x h ,x a )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": "respectively. For each word in a triple, we identified a sentence in which the rare word naturally occurs, and generated probe sentences in which we replace the rare word with x h and x a 15 . We then used bertscore to compare our sentences in terms of their similarity. In principle, we expect that bertscore(s(x h ),s(x r )) > bertscore(s(x h ),s(x a )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": "In other words, a similarity score for the pair made of a sentence with dog and the sentence with poodle is expected to be higher than than the pair made of a sentence with dog and a sentence with cat, as dog and a poodle are closer semantically, than a dog and a cat, and therefore would be a closer paraphrase of each other. Alternatively, if the model's word representations are being confounded by lexical frequency, we may instead observe the opposite pattern (i.e., the sentences with the more common words mistakenly appearing to be more similar to one another, despite their semantic difference). We consider cases in which the model correctly identifies the paraphrase (e.g., if bertscore(s(dog), s(poodle)) > bertscore(s(dog), s(cat))) as hits, and misidentifications as misses. Our \"null hypothesis\" is that there should not be any difference in hit rate between high-and low-frequency words (i.e., word frequency should not affect the model's ability to identify paraphrases). Furthermore, we compare the performance of two fine-tuned models trained on wiki-103, Bert and RoBerta, and (given the results of our earlier experiments) we hypothesize that if there is a difference in hit rate, RoBerta will prove more robust to the rare words condition, given its superior performance at predicting (and thus representing) rare words. We note that this is something of a \"toy\" experiment, given its small size, which limits the conclusions that we can draw. However, in Table 2 , we do see a greater difference in performance between the rare set of words to the common, such that the models do appear to be failing to capture the semantics of rare words, as reflected in the greater number of misses (\u03c7 2 ; p<0.001 for both models).",
"cite_spans": [],
"ref_spans": [
{
"start": 1478,
"end": 1485,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
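{
"text": "A sketch of the probe-scoring step using the bert-score package; the sentences are the illustrative (poodle, dog, cat) example from above rather than the actual probe set, and the package's default scoring model is used, whereas the paper scores with its fine-tuned Bert and RoBerta models.

from bert_score import score

# Return True (a hit) if the (hypernym, rare/common word) sentence pair scores
# higher than the (hypernym, sibling) pair, i.e. the true paraphrase is preferred.
def paraphrase_hit(s_hypernym, s_word, s_sibling, lang='en'):
    cands = [s_word, s_sibling]
    refs = [s_hypernym, s_hypernym]
    _, _, f1 = score(cands, refs, lang=lang, verbose=False)
    return bool(f1[0] > f1[1])

print(paraphrase_hit('which dog has longer hair ?',
                     'which poodle has longer hair ?',
                     'which cat has longer hair ?'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},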
{
"text": "We found that RoBerta and Bert did not differ greatly in their performance, suggesting that with this method the strong effect of word frequency outweighed the between-model difference observed in our earlier experiments. This null result could easily be an artifact of our very small sample size of 100 probe sentences, though, and we also did notice a substantial number of misses with the set of common words. Overall, despite this being based only on a small sample, it does seem that the lower performance of both models on the rare words is unlikely to be a coincidence. We hope to be able to experiment with a greater sample size to begin learning more about the degree to which rare-word inferences are reliable to produce outcomes aligned with human semantics on various downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A case study of Paraphrasing",
"sec_num": "4"
},
{
"text": "The paraphrasing task should be conducted at a larger scale. Furthermore, we hope to continue evaluating language models' prediction diversity and its effects on additional downstream tasks (for example, tasks where human speech is anticipated), since prediction diversity evaluation may vary between one task to another. The unit of evaluation can go beyond words, and may be defined at various textual granularities, such as phrases, for instance, depending on prediction diversity desired. We also leave for future work questions of to what degree different tokenization approaches, or model size effect prediction diversity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
},
{
"text": "In our experiments, we did observe differences in performance between models with different tokenization strategies (e.g. GPT-2 and RoBerta as compared with their architectural counterparts); however, these models also varied substantially from one another in other respects (size, etc.) and as such it is difficult to attribute this performance gap to tokenization alone. It may also be the case that the bigger the model (in terms of number of parameters), the more diverse it is likely to be; under this hypothesis, we would expect today's ever-larger models (e.g. GPT-3) to outperform their predecesors in terms of diversity. However, we do not believe that it is sustainable (Strubell et al., 2019; Schwartz et al., 2019) to rely on increasing model complexity as an approach to addressing the frequency-related challenges that we observed in our experiments, and believe that fundamentally different approaches to language model training are needed.",
"cite_spans": [
{
"start": 680,
"end": 703,
"text": "(Strubell et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 704,
"end": 726,
"text": "Schwartz et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
},
{
"text": "We presented two types of evaluation techniques to learn about the performance of the model across its input distribution, revealing the easier and the challenging areas to learn. Through this analysis we showed that current models are susceptible to frequency bias during training, and as a result under-performing when less frequent examples are encountered at test time, hurting the overall performance. In addition, we proposed a way to learn about the degree to which a model's prediction is semantically close to a target in cases where an exact match was not predicted, which may more accurately reflect a model's usefulness. Thirdly, we showed how a downstream task of paraphrasing may be rendered less reliable, as the models employed struggle to produce semantically-useful representations when rare words are involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We believe that language models should reflect the trained distribution more optimally than what we observed in our evaluation, and we should recognize their bias to frequency -making them unfair towards some words, and potentially harmful for our downstream tasks. We also believe it is important to take part in setting benchmarks for models' diversity. Finally, distributional representation goes beyond words, and we hope to address more complicated representational tasks as well. return False 28: end procedure partition of the Wiki-103 corpus with that found in WordNet (using the NLTK package (Loper and Bird, 2002) ), and then further filtered for vocabulary items with WordNet entries exhibiting the desired linguistic relationship. (A synonym/antonym construction could also have been chosen alternatively).",
"cite_spans": [
{
"start": 601,
"end": 623,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The frequency dynamic for the rare/common triples (x r /x c ,x h ,x a ) was (low, mid/high,mid/high) and for the common (mid/high, mid/high, mid/high) respectively. Words occurring fewer than 50 times in the Wiki-103 training partition were categorized as \"low,\" and were categorized as \"mid/high\" otherwise. Finally, a human annotator manually identified an appropriate context sentence for each target word via online search across the following webbased dictionaries: merriam-webster.com, thesaurus.com, sentencedict.com, and dictionary.cambridge.org.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Here is a rareword triple example following (x r ,x h ,x a ) order. Here is a common-word triple example following (x c ,x h ,x a ) order (a) Because of the poor economy, the factory will immediately discontinue operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "(b) Because of the poor economy, the factory will immediately cease operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "(c) Because of the poor economy, the factory will immediately continue operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The complete sentence list can be found at https:// github.com/shiranD/word_level_evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Galliers and Sp\u00e4rck Jones (1993) refer to this as the model's \"function\" (in contrast to its \"objective.\")6 In this, we followIto et al. (1999), who, writing about language models in the context of their use in ASR systems, warned against relying solely on evaluation metrics that were specific to that task (specifically, word error rate).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For whatever definition of \"word\" is appropriate in the language under consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Or, for that matter, from the previous token, \"the.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Bert and Roberta were given a '[MASK]' token at the end on a sequence to ensure unidirectional prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The code is available for all model types we present in this paper, and for the different tokenization approaches by which they are trained.11 Code at https://github.com/shiranD/word_ level_evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For tokens-i.e., words-in the test set, as opposed to tokens from the perspective of the model being evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "T1 is the relative percentage of one over the overall number of types",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the word poodle occurs much less frequently in English than either dog or cat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "see Appendix C for details on sentence selection and generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their insightful comments and suggestions. This work was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under award number R01DC015999.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The bin's assignment is based on the words' frequency of the trainset, but the bins can only be based on the intersection of the high-freq words in train, and all the words in test set. Any high/mid/low-freq train word that occurs in the test will be assigned to its appropriate bin. Code is provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Stratified Bins",
"sec_num": null
},
{
"text": "for computing first and first ten guessescxt\u2190w