{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:13:41.317631Z"
},
"title": "NLPStatTest: A Toolkit for Comparing NLP System Performance",
"authors": [
{
"first": "Haotian",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"settlement": "Seattle",
"country": "USA"
}
},
"email": ""
},
{
"first": "Denise",
"middle": [],
"last": "Mak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"settlement": "Seattle",
"country": "USA"
}
},
"email": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Gioannini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"settlement": "Seattle",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"settlement": "Seattle",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Statistical significance testing centered on pvalues is commonly used to compare NLP system performance, but p-values alone are insufficient because statistical significance differs from practical significance. The latter can be measured by estimating effect size. In this paper, we propose a three-stage procedure for comparing NLP system performance and provide a toolkit, NLPStatTest, that automates the process. Users can upload NLP system evaluation scores and the toolkit will analyze these scores, run appropriate significance tests, estimate effect size, and conduct power analysis to estimate Type II error. The toolkit provides a convenient and systematic way to compare NLP system performance that goes beyond statistical significance testing.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Statistical significance testing centered on pvalues is commonly used to compare NLP system performance, but p-values alone are insufficient because statistical significance differs from practical significance. The latter can be measured by estimating effect size. In this paper, we propose a three-stage procedure for comparing NLP system performance and provide a toolkit, NLPStatTest, that automates the process. Users can upload NLP system evaluation scores and the toolkit will analyze these scores, run appropriate significance tests, estimate effect size, and conduct power analysis to estimate Type II error. The toolkit provides a convenient and systematic way to compare NLP system performance that goes beyond statistical significance testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the field of natural language processing (NLP), the common practice is to use statistical significance testing 1 to demonstrate that the improvement exhibited by a proposed system over the baseline reflects meaningful differences, not happenstance (Dror et al., 2018 (Dror et al., , 2020 . The American Statistical Association emphasizes that ''a p-value, or statistical significance, does not measure the size of an effect or the importance of a result\" (Wasserstein and Lazar, 2016) . In other words, statistical significance is different from practical significance. The latter is rarely discussed in the NLP field.",
"cite_spans": [
{
"start": 251,
"end": 269,
"text": "(Dror et al., 2018",
"ref_id": "BIBREF4"
},
{
"start": 270,
"end": 290,
"text": "(Dror et al., , 2020",
"ref_id": "BIBREF5"
},
{
"start": 458,
"end": 487,
"text": "(Wasserstein and Lazar, 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this issue, we propose a three-stage procedure for comparing NLP system performance, shown in Figure 1 . The first stage is building an NLP system and using prospective power analysis to compute an appropriate sample size for test corpus. The second stage is hypothesis testing. We stress the need for data analysis to verify assumptions made by significance tests and the importance of estimating the effect size and conducting power analysis. The last stage is to report various results produced by the second stage.",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 113,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To automate the process, we provide a toolkit, NLPStatTest. We introduce the three-stage comparison procedure ( \u00a72), and then describe the the main components ( \u00a73) and implementation details ( \u00a74) of NLPStatTest. We also present experimental results for running the system on both real-world and simulated data ( \u00a75). Lastly, we compare NLPStatTest with existing statistical testing toolkits ( \u00a76).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we briefly describe the three-stage comparison procedure and define terms that are relevant to NLPStatTest. More detail about Stage 2 can be found in \u00a73- \u00a74.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing NLP System Performance",
"sec_num": "2"
},
{
"text": "The first stage is to build an NLP system, run it on test data, and compare the system output with a gold standard. The output of this stage is a list of numerical values such as accuracy or F-scores. Definition 1 (Evaluation unit). Let (x j , y j ) be a test instance. An evaluation unit (EU) e = {(x j , y j ), j = 1, \u2022 \u2022 \u2022 , m} is a set of test instances on which an evaluation metric can be meaningfully defined. A test set is a set of EUs. Definition 2 (Evaluation metric). Given an NLP system A, the evaluation metric M is a function that maps an EU e to a numerical value:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building an NLP System",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M A (e) = M { \u0177 j , y j , j = 1, \u2022 \u2022 \u2022 , m}",
"eq_num": "(1)"
}
],
"section": "Building an NLP System",
"sec_num": "2.1"
},
{
"text": "where\u0177 j = A(x j ) is the system output of A given x j , and m is the size of e (i.e., the number of test instances in e). The three-stage procedure for comparing NLP system performance. The pink flag boxes are the parameters that users can either set or use the default values provided by NLPStatTest. The blue hexagons are system output of NLPStatTest. \u03b1 1 and \u03b1 2 are the significance levels for normality test and statistical significance test respectively. EU stands for evaluation unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building an NLP System",
"sec_num": "2.1"
},
{
"text": "An EU may contain one or more test instances. For example, a BLEU score can be computed on one or more sentences. The EU size affects sample size, p-value, sample standard deviation, effect size and so on. It is therefore one of the parameters that users can set when using NLPStatTest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building an NLP System",
"sec_num": "2.1"
},
{
"text": "The second stage is the comparison stage which has four steps (see the largest box in Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 94,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Comparison Stage",
"sec_num": "2.2"
},
{
"text": "When we compare two NLP systems A and B, the output of Stage 1 is a set of pairs, {(M A (e i ), M B (e i ))}, where e i is the i th EU, and M A (e) (similarly M B (e)) is defined in Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "2.2.1"
},
{
"text": "Many statistical tests make certain assumptions about the sample (e.g., normality for t test), so it is important to conduct data analysis to verify those assumptions in order to choose significance tests that are appropriate for a particular sample. If the sample does not follow any known distribution, non-parametric tests should be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "2.2.1"
},
{
"text": "NLPStatTestwill estimate sample skewness and test for normality. Then NLPStatTest will choose a test statistic (mean or median) for users and recommend a list of significance tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "2.2.1"
},
{
"text": "The second step in Stage 2 is statistical significance testing, using two mutually exclusive hypotheses: the null hypothesis H 0 and the alternative H 1 . To compare two NLP systems, a (paired) two-sample test is usually used, though one-sample testing of pairwise difference is equivalent. NLPStatTest currently only considers paired two-sample testing for numerical data. Observations within a sample are assumed to be independent and identically distributed (i.i.d.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Significance Testing",
"sec_num": "2.2.2"
},
{
"text": "To run a significance test, users first choose the direction of the test: left-sided, right-sided or two-sided. Then, users specify the hypothesized value of test statistic difference \u03b4 and the significance level \u03b1, which is often set to 0.05 or 0.01 in the NLP field, and choose a test from the list. NLPStatTest will calculate the p-value and reject H 0 if and only if p < \u03b1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Significance Testing",
"sec_num": "2.2.2"
},
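{
"text": "As a concrete illustration of this step, here is a minimal Python sketch (an illustrative addition, not the toolkit's actual code; the score values are made up, and SciPy 1.6+ is assumed for the 'alternative' keyword). It uses the equivalence noted above between a paired two-sample test and a one-sample test on the differences:\n\nimport numpy as np\nfrom scipy import stats\n\nu = np.array([0.81, 0.79, 0.84, 0.80, 0.83])  # per-EU metric values of system A\nv = np.array([0.78, 0.77, 0.83, 0.76, 0.80])  # per-EU metric values of system B\ndelta, alpha = 0.0, 0.05  # hypothesized difference and significance level\n\n# A paired two-sample t test is equivalent to a one-sample t test on u - v.\nt_stat, p_value = stats.ttest_1samp(u - v, popmean=delta, alternative='two-sided')\nprint('p =', p_value, '-> reject H0' if p_value < alpha else '-> fail to reject H0')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Significance Testing",
"sec_num": "2.2.2"
},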
{
"text": "In most experimental NLP papers employing significance testing, the p-value is the only quantity reported. However, the p-value is often misused and misinterpreted. For instance, statistical significance is easily conflated with practical significance; as a result, NLP researchers often run significance tests to show that the performances of two NLP systems are different (i.e., statistical significance), without measuring the degree or the importance of such a difference (i.e., practical significance). Cohen (1990) noted \"the null hypothesis, if taken literally, is always false in the real world.\" For instance, because evaluation metric values of two NLP systems on a test set are almost never exactly the same, H 0 that two systems perform equally is (almost) always false. When H 0 is false, the p-value will eventually approach zero in large samples (Lin et al., 2013) . In other words, no mat-ter how tiny the system performance difference is, there is always a large enough dataset on which the difference is statistically significant. Therefore, statistical significance is markedly different from practical significance.",
"cite_spans": [
{
"start": 508,
"end": 520,
"text": "Cohen (1990)",
"ref_id": "BIBREF2"
},
{
"start": 861,
"end": 879,
"text": "(Lin et al., 2013)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size Estimation",
"sec_num": "2.2.3"
},
{
"text": "One way to measure practical significance is by estimating effect size, which is defined as the degree to which the 'phenomenon' is present in the population: the degree to which the null hypothesis is false (Cohen, 1994) . While the need to estimate and report effect size has long been recognized in other fields (Tomczak and Tomczak, 2014) , the same is not true in the NLP field. We include several methods for estimating effect size in NLPStatTest (see \u00a73.3).",
"cite_spans": [
{
"start": 208,
"end": 221,
"text": "(Cohen, 1994)",
"ref_id": "BIBREF3"
},
{
"start": 315,
"end": 342,
"text": "(Tomczak and Tomczak, 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size Estimation",
"sec_num": "2.2.3"
},
{
"text": "There are two types of errors in hypothesis testing: Type I errors (false positives) and Type II errors (false negatives). The Type I error of a significance test, often denoted by \u03b1, is the probability that, when H 0 is true, H 0 is rejected by the test. The Type II error of a significance test, usually denoted by \u03b2, is the probability that under H 1 , H 1 is rejected by the test. While Type I error can be controlled by predetermining the significance level, Type II error can be controlled or estimated by power analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Power Analysis",
"sec_num": "2.2.4"
},
{
"text": "Definition 3 (Statistical power). The power of a statistical significance test is the probability that under H 1 , H 0 is correctly rejected by the test. The power of a test is 1 \u2212 \u03b2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Power Analysis",
"sec_num": "2.2.4"
},
{
"text": "Higher power means that statistical inferences are more correct and accurate (Perugini et al., 2018) . While power analysis is rarely used in the NLP field, it is considered good or standard practice in some other scientific fields such as psychology and clinical trials in medicine (Perugini et al., 2018) . We implement two methods of conducting power analysis in NLPStatTest(see \u00a73.4).",
"cite_spans": [
{
"start": 77,
"end": 100,
"text": "(Perugini et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 283,
"end": 306,
"text": "(Perugini et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Power Analysis",
"sec_num": "2.2.4"
},
{
"text": "Beyond the p-value, it is important to report other quantities to make the studies reproducible and available for meta-analysis, including the name of significance test used, the predetermined significance level \u03b1, effect size estimate/estimator, the sample size, and statistical power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reporting Test Results",
"sec_num": "2.3"
},
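{
"text": "As a concrete (hypothetical) illustration, such a report could be collected in a simple Python dictionary; all values and field names below are made up, not prescribed by the paper:\n\nreport = {\n    'significance_test': 'Wilcoxon signed-rank',\n    'alpha': 0.05,\n    'p_value': 0.003,\n    'effect_size': {'index': 'Hedges g', 'estimate': 0.42},\n    'sample_size': 120,\n    'power': 0.85,\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reporting Test Results",
"sec_num": "2.3"
},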
{
"text": "NLPStatTest is a toolkit that automates the comparison procedure. It has four main steps, shown in the large box in Figure 1 . To use NLPStatTest, users provide a data file with the NLP system performance scores produced in Stage 1. NLPStatTest will prompt users to either modify or use the default values for the parameters in the pink flags and then produce the output in the blue hexagons. The users can then report (some of) the output in Stage 3 of the comparison procedure.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Design",
"sec_num": "3"
},
{
"text": "The first step of the comparison stage is data analysis, and a screenshot of this step is shown in Figure 2 . The top part (above the Run button in the purple box) shows the input and parameters that the user needs to provide, and the bottom part (below the Run button in the green box) shows the output of the data analysis step. ",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 108,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "3.1"
},
{
"text": "To compare two NLP systems, A and B, users need to provide a data file where each line is a pair of numerical values. There are two scenarios. In the first scenario, the pair is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Data File",
"sec_num": "3.1.1"
},
{
"text": "(u i , v i ), where u i = M A (e i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Data File",
"sec_num": "3.1.1"
},
{
"text": "is the evaluation metric value (e.g., accuracy or Fscore) of an EU e i given System A (see Equation 1), and v i = M B (e i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Data File",
"sec_num": "3.1.1"
},
{
"text": "In the second scenario, if u i and v i can be calculated as the mean or the median of the evaluation metric values of test instances in e i , users can upload a data file where each line is a pair of (a k , b k ), where a k and b k are the evaluation metric values of a test instance t k given System A and B, respectively. Users then chooses the EU size m and specifies whether the EU metric value should be calculated as the mean or the median of the metric values of the instances in the EU. NLPStatTest will use m adjacent lines in the file to calculate u i and v i . If users prefers to randomly shuffle the lines before calculating u i and v i , they can provide a seed for random shuffling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Data File",
"sec_num": "3.1.1"
},
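{
"text": "To make the grouping concrete, here is a minimal Python sketch (an illustrative addition, not the toolkit's actual code; synthetic scores stand in for the uploaded (a_k, b_k) pairs) that shuffles the rows with a fixed seed and aggregates every m adjacent rows into one EU:\n\nimport numpy as np\n\nrng = np.random.default_rng(42)  # seed for reproducible shuffling\npairs = rng.random((300, 2))  # stand-in for the uploaded (a_k, b_k) score pairs\nm = 15  # EU size\n\nrng.shuffle(pairs)  # optional shuffle before grouping, as described above\nn_eu = len(pairs) // m  # number of complete EUs; an incomplete trailing group is dropped\ngroups = pairs[:n_eu * m].reshape(n_eu, m, 2)\nu = groups[:, :, 0].mean(axis=1)  # per-EU value for system A (the median is also possible)\nv = groups[:, :, 1].mean(axis=1)  # per-EU value for system B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Input Data File",
"sec_num": "3.1.1"
},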
{
"text": "From the (u i , v i ) pairs, NLPStatTest generates descriptive summary statistics (e.g., mean, median, standard deviation) and histograms of three datasets, {u i }, {v i }, and {u i \u2212 v i }, as shown in the first table and the three histograms in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 255,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Histograms and Summary Statistics",
"sec_num": "3.1.2"
},
{
"text": "Many statistical tests (t test, bootstrap test based on t ratios, etc) are based on the mean as the test statistic, drawing inferences on average system performance. However, when the data distribution is not symmetric, the mean does not properly measure the central tendency. In that case, the median is a more robust measure. Another issue associated with mean is that if the distribution is heavy-tailed (e.g., the t and Cauchy distributions), the sample mean oscillates dramatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Central Tendency Measure",
"sec_num": "3.1.3"
},
{
"text": "In order to examine the symmetry of the underlying distribution, NLPStatTest checks the skewness of {u i \u2212 v i } by estimating the sample skewness (\u03b3). Based on the \u03b3 value, we use the following rule of thumb (Bulmer, 1979) to determine whether NLPStatTest would recommend the use of mean or median as the test statistic for statistical significance testing:",
"cite_spans": [
{
"start": 209,
"end": 223,
"text": "(Bulmer, 1979)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Central Tendency Measure",
"sec_num": "3.1.3"
},
{
"text": "\u2022 |\u03b3| \u2208 [0, 0.5): roughly symmetric (use mean)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Central Tendency Measure",
"sec_num": "3.1.3"
},
{
"text": "\u2022 |\u03b3| \u2208 [0.5, 1): slightly skewed (use median)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Central Tendency Measure",
"sec_num": "3.1.3"
},
{
"text": "\u2022 |\u03b3| \u2208 [1, \u221e): highly skewed (use median)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Central Tendency Measure",
"sec_num": "3.1.3"
},
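{
"text": "This rule of thumb translates directly into code. The following sketch (an illustrative addition; synthetic data stand in for {u_i \u2212 v_i}) estimates the sample skewness with SciPy and picks the test statistic:\n\nimport numpy as np\nfrom scipy import stats\n\ndiffs = np.random.default_rng(0).normal(0.02, 0.05, size=100)  # stand-in for {u_i - v_i}\ngamma = stats.skew(diffs)  # sample skewness estimate\n\n# |gamma| < 0.5 -> roughly symmetric (use mean); otherwise skewed (use median)\nstatistic = 'mean' if abs(gamma) < 0.5 else 'median'\nprint(f'skewness = {gamma:.3f} -> use the {statistic}')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Central Tendency Measure",
"sec_num": "3.1.3"
},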
{
"text": "To choose a good significance test for {u i \u2212v i }, we need to determine if the data is normally distributed. If it is, t test is the most appropriate (and powerful) test; if not, then non-parametric tests which do not assume normality might be more appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normality Test",
"sec_num": "3.1.4"
},
{
"text": "If a distribution is skewed according to \u03b3, there is no need to run normality test as the data is not normally distributed. For a non-skewed distribution, NLPStatTest will run the Shapiro-Wilk normality test (Shapiro and Wilk, 1965) , which is itself a test of statistical significance. The user can choose the significance level (\u03b1 1 in Figure 1 ).",
"cite_spans": [
{
"start": 208,
"end": 232,
"text": "(Shapiro and Wilk, 1965)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 338,
"end": 346,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Normality Test",
"sec_num": "3.1.4"
},
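{
"text": "For a non-skewed sample, the normality check can be reproduced with SciPy's implementation of the Shapiro-Wilk test. A minimal sketch (the \u03b1_1 value and the data are illustrative assumptions):\n\nimport numpy as np\nfrom scipy import stats\n\ndiffs = np.random.default_rng(1).normal(0.02, 0.05, size=100)  # stand-in for {u_i - v_i}\nalpha1 = 0.05  # user-chosen significance level for the normality test\n\nw_stat, p_value = stats.shapiro(diffs)\n# Failing to reject H0 at alpha1 means the data are consistent with normality.\nprint('normally distributed' if p_value >= alpha1 else 'not normally distributed')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normality Test",
"sec_num": "3.1.4"
},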
{
"text": "Based on the skewness check and normality test result, NLPStatTest will automatically choose a test statistic (mean or median) and recommend a list of appropriate significance tests (e.g., t test if {u i \u2212 v i } is normally distributed).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recommended Significance Tests",
"sec_num": "3.1.5"
},
{
"text": "In this step, the user sets the significance level (\u03b1 2 in Figure 1 ) and chooses a significance test from the ones recommended in the previous step. If the test has any parameter (e.g., the number of trials for bootstrap testing B), NLPStatTest will suggest a default value which can be changed by users. NLPStatTest will then run the test, calculate a p-value (and/or provide a confidence interval), and reject H 0 if p < \u03b1 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Testing",
"sec_num": "3.2"
},
{
"text": "Effect size can be estimated by different effect size indices, depending on the data types (numerical or categorical) and significance tests. Dror et al. (2020) defined effect size as the unstandardized difference between system performance, while Hauch et al. (2012) and Pimentel et al. (2019) used the standardized difference.",
"cite_spans": [
{
"start": 142,
"end": 160,
"text": "Dror et al. (2020)",
"ref_id": "BIBREF5"
},
{
"start": 248,
"end": 267,
"text": "Hauch et al. (2012)",
"ref_id": "BIBREF7"
},
{
"start": 272,
"end": 294,
"text": "Pimentel et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "NLPStatTest implements the following four indices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "Once users select one or more, NLPStatTest will calculate effect size accordingly and display the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "Cohen's d estimates the standardized mean difference by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d =\u00fb \u2212v \u03c3",
"eq_num": "(2)"
}
],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "wherev and\u00fb are the sample means and\u03c3 denote standard deviation of u \u2212 v. Cohen's d assumes normality and is one of the most frequently used effect size indices. If Cohen's d, or any other effect size indices depending on\u03c3, is used to estimate effect size, the EU size will affect the standard deviation and thus effect size estimate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "Hedges' g adjusts the bias brought by Cohen's d in small samples by the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "g = d \u2022 1 \u2212 3 4n \u2212 9 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "where n is the size of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "{u i \u2212 v i }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
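{
"text": "Equations (2) and (3) are straightforward to compute. A minimal sketch (synthetic data stand in for {u_i \u2212 v_i}; using the sample standard deviation with ddof=1 is our assumption):\n\nimport numpy as np\n\ndiffs = np.random.default_rng(2).normal(0.02, 0.05, size=50)  # stand-in for {u_i - v_i}\nn = len(diffs)\n\n# The mean of the differences equals the difference of the sample means.\nd = diffs.mean() / diffs.std(ddof=1)  # Cohen's d: standardized mean difference (Eq. 2)\ng = d * (1 - 3 / (4 * n - 9))  # Hedges' g: small-sample bias correction (Eq. 3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},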
{
"text": "Wilcoxon r is an effect size index for the Wilcoxon signed rank test, calculated as r = Z \u221a n , where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "Z = W \u2212 n(n + 1)/4 n(n+1)(2n+1) 24 \u2212 t\u2208T t 3 \u2212t 48 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "Here, W is the test statistic for Wilcoxon signed rank test and T is the set of tied ranks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
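{
"text": "Equation (4) can be computed directly from the ranked absolute differences. A sketch (illustrative data; zero differences are assumed absent, as they are conventionally dropped before ranking):\n\nimport numpy as np\nfrom scipy import stats\n\ndiffs = np.random.default_rng(3).normal(0.02, 0.05, size=40)  # stand-in for {u_i - v_i}\nn = len(diffs)\n\nranks = stats.rankdata(np.abs(diffs))  # ranks of |w_i|; ties receive averaged ranks\nw = ranks[diffs > 0].sum()  # W: sum of the ranks of the positive differences\n_, counts = np.unique(ranks, return_counts=True)\nties = counts[counts > 1]  # sizes t of the tied groups (the set T in Eq. 4)\n\nvar = n * (n + 1) * (2 * n + 1) / 24 - ((ties**3 - ties) / 48).sum()\nz = (w - n * (n + 1) / 4) / np.sqrt(var)  # Eq. 4\nr = z / np.sqrt(n)  # Wilcoxon r effect size",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},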
{
"text": "Hodges-Lehmann Estimator (Hodges and Lehmann, 1963) is an estimator for the median.",
"cite_spans": [
{
"start": 25,
"end": 51,
"text": "(Hodges and Lehmann, 1963)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "Let w i = u i \u2212 v i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "The HL estimator for one-sample testing is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
{
"text": "HL = median {(w i + w j )/2, i = j} (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},
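{
"text": "Equation (5) is the median of the Walsh averages. A minimal sketch (synthetic data):\n\nimport numpy as np\n\nw = np.random.default_rng(4).normal(0.02, 0.05, size=30)  # stand-in for w_i = u_i - v_i\ni, j = np.triu_indices(len(w))  # all index pairs with i <= j\nhl = np.median((w[i] + w[j]) / 2)  # median of the Walsh averages (Eq. 5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect Size",
"sec_num": "3.3"
},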
{
"text": "Power (Definition 3) covaries with sample size, effect size and the significance level \u03b1. In particular, power increases with larger sample size, effect size, and \u03b1. There are two common types of power analysis, namely prospective and retrospective power analysis, and NLPStatTest implements both types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Power Analysis",
"sec_num": "3.4"
},
{
"text": "Prospective power analysis is used when planning a study (usually in clinical trials) in order to decide how many subjects are needed. In the NLP field, when one constructs or chooses a test corpus for evaluation, it will be beneficial to conduct this type of power analysis to determine how big a corpus needs to be in order to ensure that the significance test reaches the desired power level. In NLPStatTest, prospective power analysis is a preliminary and optional step. The user needs to provide the expected mean and standard deviation of the differences between samples, the desired power level, and the required significance level. NLPStatTest will calculate the minimally required sample size for t test via a closed form, assuming the normal distribution of the data. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prospective Power Analysis",
"sec_num": "3.4.1"
},
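{
"text": "The closed form can be sketched with the usual normal approximation for the paired t test (an assumption on our part; the toolkit's exact formula may differ, and the input values below are made up):\n\nimport numpy as np\nfrom scipy import stats\n\nmu_diff, sd_diff = 0.02, 0.05  # expected mean and standard deviation of the differences\npower, alpha = 0.8, 0.05  # desired power and significance level (two-sided test)\n\nz_alpha = stats.norm.ppf(1 - alpha / 2)\nz_beta = stats.norm.ppf(power)\nn = int(np.ceil(((z_alpha + z_beta) * sd_diff / mu_diff) ** 2))\nprint('minimally required sample size:', n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prospective Power Analysis",
"sec_num": "3.4.1"
},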
{
"text": "Retrospective or post-hoc power analysis is usually done after a significance test to determine the relation between sample size and power. There are two scenarios associated with retrospective power analysis: When the values in {u i \u2212v i } are from a known distribution, one can use Monte Carlo simulation to directly simulate from this known distribution. To do this, one has to have an informed guess of the desired effect size (i.e., mean difference) via meta-analysis of previous studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrospective Power Analysis",
"sec_num": "3.4.2"
},
{
"text": "When the distribution of the sample is unknown a priori, one can resample with replacement from the empirical distribution of the sample (a.k.a. the bootstrap method (Efron and Tibshirani, 1993) ) to estimate the power.",
"cite_spans": [
{
"start": 166,
"end": 194,
"text": "(Efron and Tibshirani, 1993)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Retrospective Power Analysis",
"sec_num": "3.4.2"
},
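{
"text": "The bootstrap approach can be sketched as follows (an illustrative implementation, not the toolkit's code; the data, \u03b1, and the number of bootstrap trials are assumptions): resample with replacement at each candidate sample size and count how often the test rejects H_0.\n\nimport numpy as np\nfrom scipy import stats\n\ndiffs = np.random.default_rng(5).normal(0.02, 0.05, size=100)  # observed {u_i - v_i}\nalpha, n_boot = 0.05, 1000\nrng = np.random.default_rng(6)\n\ndef bootstrap_power(sample_size):\n    rejections = 0\n    for _ in range(n_boot):\n        resample = rng.choice(diffs, size=sample_size, replace=True)\n        if stats.ttest_1samp(resample, popmean=0.0).pvalue < alpha:\n            rejections += 1\n    return rejections / n_boot\n\nfor size in (30, 50, 100, 200):\n    print(size, bootstrap_power(size))  # estimated power grows with sample size",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrospective Power Analysis",
"sec_num": "3.4.2"
},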
{
"text": "NLPStatTest implements both methods. Users can employ one or both; NLPStatTest will produce a figure that shows the relation between sample size and power, as in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Retrospective Power Analysis",
"sec_num": "3.4.2"
},
{
"text": "The NLPStatTest graphical user interface can be run locally or on the Web. There is also a command line version. The graphical tool, the command line tool, the source code, a user manual, a tutorial video are available at nlpstats.ling. washington.edu. We recommend using an updated Chromium-based browser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4"
},
{
"text": "The client-side web interface is written in HTML, CSS, and JavaScript (with JQuery). The server-side code is written in Python, using the Flask web framework. YAML is used for configuration files. KaTeX is used to render mathematical symbols. The Python code uses the SciPy and NumPy libraries to implement statistical tests and Matplotlib to generate the histograms and graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4"
},
{
"text": "To test the output validity and speed of NLPStatTest, we run experiments using both real and simulated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "5"
},
{
"text": "The WMT-2017 shared task (Bojar et al., 2017) reported system performance results based on human evaluation scores; unpaired testing (Wilcoxon rank-sum) was used because not many sentences had human evaluation scores for both MT systems that were being compared. Because NLPStatTest currently implements paired testing only, we use the Wilcoxon signedrank test (instead of Wilcoxon rank-sum test) and the BLEU scores (instead of human evaluation scores) when comparing MT systems. According to Bojar et al. (2017) , a set of 15 or more sentencelevel evaluation scores constitutes a reliable measure of translation quality; thus, we set the EU size to be 15. We also reshuffled the scores before grouping test instances into evaluation units. Figure 4 shows the results of pairwise comparisons among all 16 Chinese-to-English MT systems (120 system pairs in total). The heatmap is similar to the comparison results in Bojar et al. (2017) (see Figure 5 in that paper). The minor differences of the two heatmaps are due to different evaluation metrics (BLEU vs. human scores), the significant tests (Wilcoxon signed-rank vs. Wilcoxon ranksum), and the numbers of EUs (more test sentences have BLEU scores than human evaluation scores). ",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "(Bojar et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 494,
"end": 513,
"text": "Bojar et al. (2017)",
"ref_id": "BIBREF0"
},
{
"start": 917,
"end": 936,
"text": "Bojar et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 742,
"end": 750,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 942,
"end": 950,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Real Data from WMT-2017",
"sec_num": "5.1"
},
{
"text": "We also run simulation experiments on NLPStatTest to validate the testing results. Here, we conduct two-sided, paired testing, varying sample size from 30 to 25,000, each with 20 iterations of tests to obtain a range of p-values. As shown in Figure 5 , when H 0 is true (see Fig 5(a) and 5(c)), p-values range freely in (0, 1). When H 0 is false (see 5(b) and 5(d)), p-values approach zero as sample size increases, as expected. The fast convergence to zero in 5(d) may be due to the small variance of the differences between the two Beta samples (\u2248 0.046), even though the difference between sample medians is small (\u2248 0.02). In contrast, 5(b) converges to zero much more slowly due to the large variance.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 275,
"end": 283,
"text": "Fig 5(a)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Simulated Data",
"sec_num": "5.2"
},
{
"text": "Dror et al. 2018 NLPStatTest is based on the frequentist approach to hypothesis testing. Sadeqi Azer et al. (2020) developed a Bayesian system 3 which uses the Bayes factor to determine the posterior probability of H 0 being true or false.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "While statistical significance testing has been commonly used to compare NLP system performance, a small p-value alone is not sufficient because statistical significance is different from practical significance. To measure practical significance, we recommend estimating and reporting of effect size. It is also necessary to conduct power analysis to ensure that the test corpus is large enough to achieve a desirable power level. We propose a three-stage procedure for comparing NLP system performance, and provide a toolkit, NLPStatTest, to automate the testing stage of the procedure. For future work, we will extend this work to hypothesis testing with multiple datasets or multiple metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Here we adopt the frequentist approach to hypothesis testing. The debate over frequentist and Bayesian is beyond the scope of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Findings of the 2017 conference on machine translation (WMT17)",
"authors": [
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "169--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Bojar, R. Chatterjee, C. Federmann, Y. Graham, B. Haddow, S. Huang, M. Huck, P. Koehn, Q. Liu, V. Logacheva, C. Monz, M. Negri, M. Post, R. Ru- bino, L. Specia, and M. Turchi. 2017. Find- ings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Confer- ence on Machine Translation, Volume 2: Shared 2 https://github.com/rtmdrr/ testSignificanceNLP 3 https://github.com/allenai/HyBayes Task Papers, pages 169-214, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Principles of Statistics",
"authors": [
{
"first": "M",
"middle": [
"G"
],
"last": "Bulmer",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. G. Bulmer. 1979. Principles of Statistics, page 57. Dover, New York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Things I have learned (so far). American Psychologist",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "45",
"issue": "",
"pages": "1304--1312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Cohen. 1990. Things I have learned (so far). Ameri- can Psychologist, 45(12):1304 -1312.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The earth is round (p < .05)",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1994,
"venue": "American Psychologist",
"volume": "",
"issue": "",
"pages": "997--1003",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Cohen. 1994. The earth is round (p < .05). American Psychologist, pages 997-1003.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The hitchhiker's guide to testing statistical significance in natural language processing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL",
"volume": "1",
"issue": "",
"pages": "1383--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Dror, G. Baumer, S. Shlomov, and R. Reichart. 2018. The hitchhiker's guide to testing statistical signifi- cance in natural language processing. In Proceed- ings of ACL-2018 (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical Significance Testing for Natural Language Processing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Peled-Cohen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Dror, L. Peled-Cohen, S. Shlomov, and R. Reichart. 2020. Statistical Significance Testing for Natural Language Processing. Morgan & Claypool.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Introduction to the Bootstrap",
"authors": [
{
"first": "B",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Efron and R. J. Tibshirani. 1993. An Introduction to the Bootstrap. Chapman & Hall, New York, NY.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistic cues to deception assessed by computer programs: A meta-analysis",
"authors": [
{
"first": "V",
"middle": [],
"last": "Hauch",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bland\u00f3n-Gitlin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Masip",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Sporer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Workshop on Computational Approaches to Deception Detection",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Hauch, I. Bland\u00f3n-Gitlin, J. Masip, and S. L. Sporer. 2012. Linguistic cues to deception assessed by com- puter programs: A meta-analysis. In Proceedings of the Workshop on Computational Approaches to De- ception Detection, pages 1-4, Avignon, France.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Estimates of location based on rank tests",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Hodges",
"suffix": ""
},
{
"first": "E",
"middle": [
"L"
],
"last": "Lehmann",
"suffix": ""
}
],
"year": 1963,
"venue": "Annals of Mathematical Statistics",
"volume": "34",
"issue": "2",
"pages": "598--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. L. Hodges and E. L. Lehmann. 1963. Estimates of lo- cation based on rank tests. Annals of Mathematical Statistics, 34(2):598-611.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Too big to fail: Large samples and the p-value problem. Information Systems Research",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Shmueli",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "24",
"issue": "",
"pages": "906--917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Lin, H. Lucas, and G. Shmueli. 2013. Too big to fail: Large samples and the p-value problem. Infor- mation Systems Research, 24:906-917.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A practical primer to power analysis for simple experimental designs",
"authors": [
{
"first": "M",
"middle": [],
"last": "Perugini",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gallucci",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Costantini",
"suffix": ""
}
],
"year": 2018,
"venue": "International Review of Social Psychology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Perugini, M. Gallucci, and G. Costantini. 2018. A practical primer to power analysis for simple experi- mental designs. International Review of Social Psy- chology, 31.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Meaning to form: Measuring systematicity as information",
"authors": [
{
"first": "T",
"middle": [],
"last": "Pimentel",
"suffix": ""
},
{
"first": "A",
"middle": [
"D"
],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blasi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL-2019",
"volume": "",
"issue": "",
"pages": "1751--1764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Pimentel, A. D. McCarthy, D. Blasi, B. Roark, and R. Cotterell. 2019. Meaning to form: Measuring sys- tematicity as information. In Proceedings of ACL- 2019, pages 1751-1764, Florence, Italy.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Not all claims are created equal: Choosing the right statistical approach to assess hypotheses",
"authors": [
{
"first": "E",
"middle": [],
"last": "Azer",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Sadeqi Azer, D. Khashabi, A. Sabharwal, and D. Roth. 2020. Not all claims are created equal: Choosing the right statistical approach to assess hy- potheses. In Annual Meeting of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An analysis of variance test for normality (complete samples) \u2020",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Shapiro",
"suffix": ""
},
{
"first": "M",
"middle": [
"B"
],
"last": "Wilk",
"suffix": ""
}
],
"year": 1965,
"venue": "Biometrika",
"volume": "52",
"issue": "3-4",
"pages": "591--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. S. Shapiro and M. B. Wilk. 1965. An analysis of variance test for normality (complete samples) \u2020. Biometrika, 52(3-4):591-611.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The need to report effect size estimates revisited. An overview of some recommended measures of effect size",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomczak",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Tomczak",
"suffix": ""
}
],
"year": 2014,
"venue": "Trends in Sport Sciences",
"volume": "21",
"issue": "",
"pages": "19--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Tomczak and E. Tomczak. 2014. The need to re- port effect size estimates revisited. An overview of some recommended measures of effect size. Trends in Sport Sciences, 21:19-25.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The ASA statement on p-values: Context, process, and purpose",
"authors": [
{
"first": "R",
"middle": [
"L"
],
"last": "Wasserstein",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Lazar",
"suffix": ""
}
],
"year": 2016,
"venue": "The American Statistician",
"volume": "70",
"issue": "2",
"pages": "129--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. L. Wasserstein and N. A. Lazar. 2016. The ASA statement on p-values: Context, process, and pur- pose. The American Statistician, 70(2):129-133.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 1: The three-stage procedure for comparing NLP system performance. The pink flag boxes are the parameters that users can either set or use the default values provided by NLPStatTest. The blue hexagons are system output of NLPStatTest. \u03b1 1 and \u03b1 2 are the significance levels for normality test and statistical significance test respectively. EU stands for evaluation unit."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Screenshot of the data analysis step. The part above the Run button are parameters that users can set, and the part below is NLPStatTest output."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Screenshot for retrospective power analysis."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Heatmap of pairwise comparison for the 16 WMT-2017 Chinese-to-English MT systems. BLEU scores and Wilcoxon signed-rank test are used. pvalues are adjusted via Bonferroni correction. Dark green cells indicate statistical significance (p < 0.05); light green cells indicate non-significance (p \u2265 0.05)."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Plots of p-value against sample size. Figure (a) and (b) use two samples with normal distribution, while (c) and (d) use Beta distribution. H 0 should be true for (a) and (c) and false for (b) and (d). We run t test for (a) and (b), and Wilcoxon signed-rank test for (c) and (d). The red dotted line stands for the threshold \u03b1 = 0.05. The light purple shade depicts the range of p-values. The solid blue line denotes the mean of p-values for each sample size."
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "made an accompanying package available 2 for hypothesis testing. This package includes functionalities such as testing for normality, t testing, permutation/bootstrap testing, and using McNemar's test for categorical data. NLPStatTest implements all the aforementioned tests except McNemar's test. In addition, NLPStatTest offers data analysis, effect size estimation, power analysis and graphical interface.",
"num": null
}
}
}
}