{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:55:49.733302Z"
},
"title": "Phonotactic Complexity and Its Trade-offs",
"authors": [
{
"first": "Tiago",
"middle": [],
"last": "Pimentel",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present methods for calculating a measure of phonotactic complexity-bits per phonemethat permits a straightforward cross-linguistic comparison. When given a word, represented as a sequence of phonemic segments such as symbols in the international phonetic alphabet, and a statistical model trained on a sample of word types from the language, we can approximately measure bits per phoneme using the negative log-probability of that word under the model. This simple measure allows us to compare the entropy across languages, giving insight into how complex a language's phonotactics is. Using a collection of 1016 basic concept words across 106 languages, we demonstrate a very strong negative correlation of \u22120.74 between bits per phoneme and the average length of words.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present methods for calculating a measure of phonotactic complexity-bits per phonemethat permits a straightforward cross-linguistic comparison. When given a word, represented as a sequence of phonemic segments such as symbols in the international phonetic alphabet, and a statistical model trained on a sample of word types from the language, we can approximately measure bits per phoneme using the negative log-probability of that word under the model. This simple measure allows us to compare the entropy across languages, giving insight into how complex a language's phonotactics is. Using a collection of 1016 basic concept words across 106 languages, we demonstrate a very strong negative correlation of \u22120.74 between bits per phoneme and the average length of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
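The abstract's measure can be made concrete with a small sketch. This is a minimal illustration, not the paper's released code; `model.prob` is a hypothetical interface standing in for any trained phoneme-level language model.

```python
import math

def bits_per_phoneme(word, model):
    """Average negative log2-probability of a word's phonemes under a
    trained model: the abstract's bits-per-phoneme measure.
    `model.prob(prefix, phoneme)` is a hypothetical method returning
    p(phoneme | prefix)."""
    total_bits = 0.0
    for i, phoneme in enumerate(word):
        total_bits -= math.log2(model.prob(word[:i], phoneme))
    return total_bits / len(word)
```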
"body_text": [
{
"text": "One prevailing view on system wide phonological complexity is that as one aspect increases in complexity (e.g., size of phonemic inventory), another reduces in complexity (e.g., degree of phonotactic interactions). Underlying this claimthe so-called compensation hypothesis (Martinet, 1955; Moran and Blasi, 2014) -is the conjecture that languages are, generally speaking, of roughly equivalent complexity, that is, no language is overall inherently more complex than another. This conjecture is widely accepted in the literature and dates back at least to the work of Hockett (1958) . Because along any one axis, a language may be more complex than another, this conjecture has a corollary that compensatory relationships between different types of complexity must exist. Such compensation has been hypothesized to be the result of natural processes of historical change, and is sometimes attributed to a potential linguistic universal of equal communicative capacity (Pellegrino et al., 2011; Coup\u00e9 et al., 2019) .",
"cite_spans": [
{
"start": 274,
"end": 290,
"text": "(Martinet, 1955;",
"ref_id": "BIBREF43"
},
{
"start": 291,
"end": 313,
"text": "Moran and Blasi, 2014)",
"ref_id": "BIBREF50"
},
{
"start": 577,
"end": 583,
"text": "(1958)",
"ref_id": null
},
{
"start": 969,
"end": 994,
"text": "(Pellegrino et al., 2011;",
"ref_id": "BIBREF52"
},
{
"start": 995,
"end": 1014,
"text": "Coup\u00e9 et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Methods for making hypotheses about linguistic complexity objectively measurable and testable have long been of interest, though existing measures are typically relatively coarse-see, for example, Moran and Blasi (2014) and \u00a72 below for reviews. Briefly, counting-based measures such as inventory sizes (e.g., numbers of vowels, consonants, syllables) typically play a key role in assessing phonological complexity. Yet, in addition to their categorical nature, such measures generally do not capture longer-distance (e.g., crosssyllabic) phonological dependencies such as vowel harmony. In this paper, we take an informationtheoretic view of phonotactic complexity, and advocate for a measure that permits straightforward cross-linguistic comparison: bits per phoneme. For each language, a statistical language model over words (represented as phonemic sequences) is trained on a sample of types from the language, and then used to calculate the bits per phoneme for new samples, thus providing an upper bound of the actual entropy (Brown et al., 1992) .",
"cite_spans": [
{
"start": 197,
"end": 219,
"text": "Moran and Blasi (2014)",
"ref_id": "BIBREF50"
},
{
"start": 1033,
"end": 1053,
"text": "(Brown et al., 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Characterizing phonemes via information theoretic measures goes back to Cherry et al. (1953) , who discussed the information content of phonemes in isolation, based on the presence or absence of distinctive features, as well as in groups, (e.g., trigrams or possibly syllables). Here we leverage modern recurrent neural language modeling methods to build models over full word forms represented as phoneme strings, thus capturing any dependencies over longer distances (e.g., harmony) in assigning probabilities to phonemes in sequence. By training and evaluating on comparable corpora in each language, consisting of concept-aligned words, we can characterize and compare their phonotactics. Probabilistic characterizations of phonotactics have been used extensively in psycholinguistics (see \u00a72.4), but such methods have generally been used to assess single words within a lexicon (e.g., classifying high versus low probability words during stimulus construction), rather than information-theoretic properties of the lexicon as a whole, which our work explores.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "Cherry et al. (1953)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The empirical portion of our paper exploits this information-theoretic take on complexity to examine multiple aspects of phonotactic complexity:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(i) Bits per Phoneme and Word Length: In \u00a75.1, we show a very high negative correlation of \u22120.74 between bits per phoneme and average word length for the same 1016 basic concepts across 106 languages. This correlation is plotted in Figure 1 . In contrast, conventional phonotactic complexity measures (e.g., number of consonants in an inventory) demonstrate poor correlation with word length. Our results are consistent with Pellegrino et al. (2011) , who show a similar correlation in speech. 1 We additionally establish, in \u00a75.2, that the correlation persists when controlling for characteristics of long words (e.g., early versus late positions in the word).",
"cite_spans": [
{
"start": 425,
"end": 449,
"text": "Pellegrino et al. (2011)",
"ref_id": "BIBREF52"
},
{
"start": 494,
"end": 495,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
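The correlation in (i) can be reproduced in outline as follows. The numbers below are placeholders, and Pearson's r is assumed (this excerpt does not name the estimator).

```python
import numpy as np

# One entry per language: mean bits per phoneme and mean word length
# (in phonemes) over the same concept list. Values are placeholders.
bits_per_phoneme = np.array([3.2, 2.8, 4.1, 3.6])
avg_word_length = np.array([6.1, 7.0, 4.9, 5.5])

# Pearson correlation between the two per-language summaries.
r = np.corrcoef(bits_per_phoneme, avg_word_length)[0, 1]
print(f"r = {r:.2f}")  # the paper reports -0.74 over 106 languages
```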
{
"text": "(ii) Constraining Language: Despite often being thought of as adding complexity, processes like vowel harmony and finalobstruent devoicing improve the predictabil-ity of subsequent segments by constraining the number of well-formed forms. Thus, they reduce complexity measured in bits per phoneme. We validate our models by systematically removing certain constraints in our corpora in \u00a75.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(iii) Intra-versus Inter-Family Correlation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Additionally, we present results in \u00a75.4 showing that our complexity measure not only correlates with word length in a diverse set of languages, but also intra language families. Standard measures of phonotactic complexity do not show such correlations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(iv) Explicit feature representations: We also find (in \u00a75.5) that methods for including features explicitly in the representation, using methods described in \u00a74.1, yield little benefit except in an extremely low-resource condition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our methods 2 permit a straightforward crosslinguistic comparison of phonotactic complexity, which we use to demonstrate an intriguing tradeoff with word length. Before motivating and presenting our methods, we next review related work on measuring complexity and phonotactic modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Background: Phonological Complexity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Linguistic complexity is a nuanced topic. For example, one can judge a particular sentence to be syntactically complex relative to other sentences in the language. However, one can also describe a language as a whole as being complex in one aspect or another (e.g., polysynthetic languages are often deemed morphologically complex). In this paper, we look to characterize phonotactics at the language level. However, we use methods more typically applied to specific sentences in a language, for example in the service of psycholinguistic experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Complexity",
"sec_num": "2.1"
},
{
"text": "In cross-linguistic studies, the term complexity is generally used chiefly in two manners, which Moran and Blasi (2014) follow Miestamo (2006) in calling relative and absolute. Relative complexity metrics are those that capture the difficulty of learning or processing language, which Miestamo (2006) points out may vary depending on the individual (hence, is relative to the individual being considered). For example, vowel harmony, which we will touch upon later in the paper, may make vowels more predictable for a native speaker, hence less difficult to process; for a second language learner, however, vowel harmony may increase difficulty of learning and speaking. Absolute complexity measures, in contrast, assess the number of parts of a linguistic (sub-)system (e.g., number of phonemes or licit syllables).",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "Moran and Blasi (2014)",
"ref_id": "BIBREF50"
},
{
"start": 127,
"end": 142,
"text": "Miestamo (2006)",
"ref_id": "BIBREF47"
},
{
"start": 285,
"end": 300,
"text": "Miestamo (2006)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Complexity",
"sec_num": "2.1"
},
{
"text": "In the sentence processing literature, surprisal (Hale, 2001; Levy, 2008) is a widely used measure of processing difficulty, defined as the negative log probability of a word given the preceding words. Words that are highly predictable from the preceding context have low surprisal, and those that are not predictable have high surprisal.",
"cite_spans": [
{
"start": 49,
"end": 61,
"text": "(Hale, 2001;",
"ref_id": "BIBREF22"
},
{
"start": 62,
"end": 73,
"text": "Levy, 2008)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Complexity",
"sec_num": "2.1"
},
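As a worked illustration of surprisal as defined here (a sketch, not tied to any particular language model):

```python
import math

def surprisal_bits(p_word_given_context):
    """Surprisal of a word: -log2 p(word | preceding words)."""
    return -math.log2(p_word_given_context)

# A word with in-context probability 0.5 carries 1 bit of surprisal;
# an improbable word (p = 1/1024) carries 10 bits.
assert surprisal_bits(0.5) == 1.0
assert surprisal_bits(1 / 1024) == 10.0
```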
{
"text": "The phonotactic measure we advocate for in \u00a73 is related to surprisal, though at the phoneme level rather than the word level, and over words rather than sentences. Measures related to phonotactic probability have been used in a range of psycholinguistic studies-see \u00a72.4-though generally to characterize single words within a language (e.g., high versus low probability words) rather than for cross-linguistic comparison as we are here. Returning to the distinction made by Miestamo (2006) , we will remain agnostic in this paper as to which class (relative or absolute) such probabilistic complexity measures fall within, as well as whether the trade-offs that we document are bona fide instances of complexity compensation or are due to something else, for example, related to the communicative capacity as hypothesized by Pellegrino et al. (2011) . We bring up this terminological distinction primarily to situate our use of complexity within the diverse usage in the literature.",
"cite_spans": [
{
"start": 475,
"end": 490,
"text": "Miestamo (2006)",
"ref_id": "BIBREF47"
},
{
"start": 826,
"end": 850,
"text": "Pellegrino et al. (2011)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Complexity",
"sec_num": "2.1"
},
{
"text": "Additionally, however, we will point out that an important motivation for those advocating for the use of absolute over relative measures to characterize linguistic complexity in cross-linguistic studies is a practical one. Miestamo (2006 Miestamo ( , 2008 claims that relative complexity measures are infeasible for broadly cross-linguistic studies because they rely on psycholinguistic data, which is neither common enough nor sufficiently easily comparable across languages to support reliable comparison. In this study, we demonstrate that surprisal and related measures are not subject to the practical obstacles raised by Miestamo, independently of whichever class of complexity they fall into.",
"cite_spans": [
{
"start": 224,
"end": 238,
"text": "Miestamo (2006",
"ref_id": "BIBREF47"
},
{
"start": 239,
"end": 256,
"text": "Miestamo ( , 2008",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Complexity",
"sec_num": "2.1"
},
{
"text": "The complexity of phonemes has long been studied in linguistics, including early work on the topic by Zipf (1935) , who argued that a phoneme's articulatory effort was related to its frequency. Trubetzkoy (1938) introduced the notion of markedness of phonological features, which bears some indirect relation to both frequency and articulatory complexity. Phonological complexity can be formulated in terms of language production (e.g., complexity of planning or articulation) or in terms of language processing (e.g., acoustic confusability or predictability), a distinction often framed around the ideas of articulatory complexity and perceptual salience (see, e.g., Maddieson, 2009) . One recent instantiation of this was the inclusion of both focalization and dispersion to model vowel system typology (Cotterell and Eisner, 2017) .",
"cite_spans": [
{
"start": 102,
"end": 113,
"text": "Zipf (1935)",
"ref_id": "BIBREF70"
},
{
"start": 194,
"end": 211,
"text": "Trubetzkoy (1938)",
"ref_id": "BIBREF68"
},
{
"start": 669,
"end": 685,
"text": "Maddieson, 2009)",
"ref_id": "BIBREF40"
},
{
"start": 806,
"end": 834,
"text": "(Cotterell and Eisner, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "It is also natural to ask questions about the phonological complexity of an entire language in addition to that of individual phonemeswhether articulatory or perceptual, phonemic or phonotactic. Measures of such complexity that allow for cross-linguistic comparison are nontrivial to define. We review several previously proposed metrics here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "Size of Phoneme Inventory. The most basic metric proposed for measuring phonological complexity is the number of distinct phonemes in the language's phonemic inventory (Nettle, 1995) . There has been considerable historical interest in counting both the number of vowels and the number of consonants (see, e.g., Hockett, 1955; Greenberg et al., 1978; Maddieson and Disner, 1984) . Phoneme inventory size has its limitations-it ignores the phonotactics of the language. It does, however, have the advantage that it is relatively easy to compute without further linguistic analysis. Correlations between the size of vowel and consonant inventories (measured in number of phonemes) have been extensively studied, with contradictory results presented in the literature-see, for example, Moran and Blasi (2014) for a review. Increases in phonemic inventory size are also hypothesized to negatively correlate with word length measured in phonemes (Moran and Blasi, 2014) . In Nettle (1995) , an inverse relationship was demonstrated between the size of the segmental inventory and the mean word length for 10 languages, and similar results (with some qualifications) were found for a much larger collection of languages in Moran and Blasi (2014) . 3 We will explore phoneme inventory size as a baseline in our studies in \u00a75.",
"cite_spans": [
{
"start": 168,
"end": 182,
"text": "(Nettle, 1995)",
"ref_id": "BIBREF51"
},
{
"start": 312,
"end": 326,
"text": "Hockett, 1955;",
"ref_id": "BIBREF28"
},
{
"start": 327,
"end": 350,
"text": "Greenberg et al., 1978;",
"ref_id": null
},
{
"start": 351,
"end": 378,
"text": "Maddieson and Disner, 1984)",
"ref_id": "BIBREF41"
},
{
"start": 941,
"end": 964,
"text": "(Moran and Blasi, 2014)",
"ref_id": "BIBREF50"
},
{
"start": 970,
"end": 983,
"text": "Nettle (1995)",
"ref_id": "BIBREF51"
},
{
"start": 1217,
"end": 1239,
"text": "Moran and Blasi (2014)",
"ref_id": "BIBREF50"
},
{
"start": 1242,
"end": 1243,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "Markedness in Phoneme Inventory. A refinement of phoneme inventory size takes into account markedness of the individual phonemes. McWhorter (2001) argues that one should judge the complexity of an inventory by counting the cross-linguistic frequency of the phonemes in the inventory, channeling the spirit of Greenberg (1966) . Thus, a language that has fewer phonemes, but contains cross-linguistically marked ones such as clicks, could be more complex. 4 McWhorter justifies this definition with the observation that no attested language has a phonemic inventory that consists only of marked segments. Beyond frequency, Lindblom and Maddieson (1988) propose a tripartite markedness rating scheme for various consonants. In this paper, we are principally looking at phonotactic complexity, though we did examine the joint training of models across languages, which can be seen as modeling some degree of typicality and markedness.",
"cite_spans": [
{
"start": 130,
"end": 146,
"text": "McWhorter (2001)",
"ref_id": "BIBREF44"
},
{
"start": 309,
"end": 325,
"text": "Greenberg (1966)",
"ref_id": "BIBREF20"
},
{
"start": 455,
"end": 456,
"text": "4",
"ref_id": null
},
{
"start": 622,
"end": 651,
"text": "Lindblom and Maddieson (1988)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "Word Length. As stated earlier, word length, measured in the number of phonemes in a word, has been shown to negatively correlate with other complexity measures, such as phoneme inventory size (Nettle, 1995; Moran and Blasi, 2014) . To the extent that this is interpreted as being a compensatory relation, this would indicate that word length is being taken as an implicit measure of complexity. Alternatively, word length has a natural interpretation in terms of information rate, 3 Note that by examining negative correlations between word length and inventory size within the context of complexity compensation, word length is also being taken implicitly as a complexity measure, as we shortly make explicit.",
"cite_spans": [
{
"start": 193,
"end": 207,
"text": "(Nettle, 1995;",
"ref_id": "BIBREF51"
},
{
"start": 208,
"end": 230,
"text": "Moran and Blasi, 2014)",
"ref_id": "BIBREF50"
},
{
"start": 482,
"end": 483,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "4 McWhorter (2001) was one of the first to offer a quantitative treatment of linguistic complexity at all levels. Note, however, he rejects the equal complexity hypothesis, arguing that creoles are simpler than other languages. As our data contain no creole languages, we cannot address this hypothesis; rather, we only compare non-creole languages. so trade-offs could be attributed to communicative capacity (Pellegrino et al., 2011; Coup\u00e9 et al., 2019) .",
"cite_spans": [
{
"start": 2,
"end": 18,
"text": "McWhorter (2001)",
"ref_id": "BIBREF44"
},
{
"start": 410,
"end": 435,
"text": "(Pellegrino et al., 2011;",
"ref_id": "BIBREF52"
},
{
"start": 436,
"end": 455,
"text": "Coup\u00e9 et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "Number of Licit Syllables. Phonological constraints extend beyond individual units to the structure of entire words themselves, as we discussed above; so why stop at counting phonemes? One step in that direction is to investigate the syllabic structure of language, and count the number of possible licit syllables in the language (Maddieson and Disner, 1984; Shosted, 2006) . Syllabic complexity brings us closer to a more holistic measure of phonological complexity. Take, for instance, the case of Mandarin Chinese. At first blush, one may assume that Mandarin has a complex phonology due to an above-averagesized phonemic inventory (including tones); closer inspection, however, reveals a more constrained system. Mandarin only admits two codas: /n/ and /N/.",
"cite_spans": [
{
"start": 331,
"end": 359,
"text": "(Maddieson and Disner, 1984;",
"ref_id": "BIBREF41"
},
{
"start": 360,
"end": 374,
"text": "Shosted, 2006)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "Although syllable inventories and syllablebased measures of phonotactic complexityfor example, highest complexity syllable type in Maddieson (2006) -do incorporate more of the constraints at play in a language versus segmentbased measures, (a) they remain relatively simple counting measures; and (b) phonological constraints do not end at the syllable boundary. Phenomena such as vowel harmony operate at the word level. Further, the combinatorial possibilities captured by a syllabic inventory, as discussed by Maddieson (2009) , can be seen as a sort of categorical version of a distribution over forms. Stochastic models of word-level phonotactics permit us to go beyond simple enumeration of a set, and characterize the distribution in more robust information-theoretic terms.",
"cite_spans": [
{
"start": 131,
"end": 147,
"text": "Maddieson (2006)",
"ref_id": "BIBREF39"
},
{
"start": 513,
"end": 529,
"text": "Maddieson (2009)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures of Phonological Complexity",
"sec_num": "2.2"
},
{
"text": "Beyond characterizing the complexity of phonemes in isolation or the number of syllables, one can also look at the system determining how phonemes combine to form longer sequences in order to create words. The study of which sequences of phonemes constitute natural-sounding words is called phonotactics. For example, as Chomsky and Halle (1965) point out in their oftcited example, brick is an actual word in English; 5 blick is not an actual word in English, but is judged to be a possible word by English speakers; and bnick is neither an actual nor a possible word in English, due to constraints on its phonotactics.",
"cite_spans": [
{
"start": 321,
"end": 345,
"text": "Chomsky and Halle (1965)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactics",
"sec_num": "2.3"
},
{
"text": "Psycholinguistic studies often use phonotactic probability to characterize stimuli within a language-see \u00a72.4 for details. For example, Goldrick and Larson (2008) demonstrate that both articulatory complexity and phonotactic probability influence the speed and accuracy of speech production. Measures of the overall complexity of a phonological system must thus also account for phonotactics.",
"cite_spans": [
{
"start": 136,
"end": 162,
"text": "Goldrick and Larson (2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactics",
"sec_num": "2.3"
},
{
"text": "Cherry et al. 1953took an explicitly informationtheoretic view of phonemic structure, including discussions of both encoding phonemes as feature bundles and the redundancy within groups of phonemes in sequence. This perspective of phonemic coding has led to work on characterizing the explicit rules or constraints that lead to redundancy in phoneme sequences, including morpheme structure rules (Halle, 1959) or conditions (Stanley, 1967) . Recently, Futrell et al. (2017) took such approaches as inspiration for a generative model over feature dependency graphs. We, too, examine decomposition of phonemes into features for representation in our model (see \u00a74.1), though in general this only provided modeling improvements over atomic phoneme symbols in a low-resource scenario.",
"cite_spans": [
{
"start": 396,
"end": 409,
"text": "(Halle, 1959)",
"ref_id": "BIBREF24"
},
{
"start": 424,
"end": 439,
"text": "(Stanley, 1967)",
"ref_id": "BIBREF60"
},
{
"start": 452,
"end": 473,
"text": "Futrell et al. (2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactics",
"sec_num": "2.3"
},
{
"text": "Much of the work in phonotactic modeling is intended to explain the sorts of grammaticality judgments exemplified by the examples of Chomsky and Halle (1965) ",
"cite_spans": [
{
"start": 133,
"end": 157,
"text": "Chomsky and Halle (1965)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactics",
"sec_num": "2.3"
},
{
"text": "discussed earlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactics",
"sec_num": "2.3"
},
{
"text": "Recent work is typically founded on the commonly held perspective that such judgements are gradient, 6 hence amenable to stochastic modeling (e.g., Hayes and Wilson, 2008; Futrell et al., 2017 -though cf. Gorman, 2013 . In this paper, however, we are looking at phonotactic modeling as the means for assessing phonotactic complexity and discovering potential evidence of trade-offs cross-linguistically, and are not strictly speaking evaluating the model on its ability to capture such judgments, gradient or otherwise.",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "Hayes and Wilson, 2008;",
"ref_id": "BIBREF26"
},
{
"start": 172,
"end": 192,
"text": "Futrell et al., 2017",
"ref_id": "BIBREF15"
},
{
"start": 193,
"end": 217,
"text": "-though cf. Gorman, 2013",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactics",
"sec_num": "2.3"
},
{
"text": "A word's phonotactic probability has been shown to influence both processing and learning of 6 Gradient judgments would account for the fact that bwick is typically judged to be a possible English word like blick but not as good. In other words, bwick is better than bnick but not as good as blick.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactic Probability and Surprisal",
"sec_num": "2.4"
},
{
"text": "language. Words with high phonotactic probabilities (see brief notes on the operationalization of this below) have been shown to speed speech processing, both recognition (e.g., Vitevitch and Luce, 1999) and production (e.g., Goldrick and Larson, 2008) . Phonotactically probable words in a language have also been shown to be easier to learn (Storkel, 2001 (Storkel, , 2003 Coady and Aslin, 2004, inter alia) ; although such an effect is also influenced by neighborhood density (Coady and Aslin, 2003) , as are the speech processing effects (Vitevitch and Luce, 1999) . Informally, phonological neighborhood density is the number of similar sounding words in the lexicon, which, to the extent that high phonotactic probability implies phonotactic patterns frequent in the lexicon, typically correlates to some degree with phonotactic probability-that is, dense neighborhoods will typically consist of phonotactically probable words. Some effort has been made to disentangle the effect of these two characteristics (Vitevitch and Luce, 1999; Storkel et al., 2006; Storkel and Lee, 2011, inter alia).",
"cite_spans": [
{
"start": 178,
"end": 203,
"text": "Vitevitch and Luce, 1999)",
"ref_id": "BIBREF69"
},
{
"start": 226,
"end": 252,
"text": "Goldrick and Larson, 2008)",
"ref_id": "BIBREF16"
},
{
"start": 343,
"end": 357,
"text": "(Storkel, 2001",
"ref_id": "BIBREF61"
},
{
"start": 358,
"end": 374,
"text": "(Storkel, , 2003",
"ref_id": "BIBREF62"
},
{
"start": 375,
"end": 409,
"text": "Coady and Aslin, 2004, inter alia)",
"ref_id": null
},
{
"start": 479,
"end": 502,
"text": "(Coady and Aslin, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 542,
"end": 568,
"text": "(Vitevitch and Luce, 1999)",
"ref_id": "BIBREF69"
},
{
"start": 1015,
"end": 1041,
"text": "(Vitevitch and Luce, 1999;",
"ref_id": "BIBREF69"
},
{
"start": 1042,
"end": 1063,
"text": "Storkel et al., 2006;",
"ref_id": "BIBREF63"
},
{
"start": 1064,
"end": 1064,
"text": "",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactic Probability and Surprisal",
"sec_num": "2.4"
},
{
"text": "Within the psycholinguistics literature referenced above, phonotactic probability was typically operationalized by summing or averaging the frequency with which single phonemes and phoneme bigrams occur, either overall or in certain word positions (initial, medial, final); and neighborhood density of a word is typically the number of words in a lexicon that have Levenshtein distance 1 from the word (see, e.g., Storkel and Hoover, 2010). Note that these measures are used to characterize specific words, that is, given a lexicon, these measures allow for the designation of high versus low phonotactic probability words and high versus low neighborhood density words, which is useful for designing experimental stimuli. Our bits per phoneme measure, in contrast, is used to characterize the distribution over a sample of a language rather than specific individual words in that language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactic Probability and Surprisal",
"sec_num": "2.4"
},
{
"text": "Other work has made use of phonotactic probability to examine how such processing and learning considerations may impact the lexicon. Dautriche et al. (2017) take phonotactic probability as one component of ease of processing and learning-the other being perceptual confusability-that might influence how lexicons become organized over time. They operationalize phonotactic probability via generative phonotactic models (phoneme n-gram models and probabilistic context-free grammars with syllable structure), hence closer to the approaches described in this paper than the work cited earlier in this section. Generating artificial lexicons from such models, they find that real lexicons demonstrate higher network density (as indicated by Levenshtein distances, frequency of minimal pairs, and other measures) than the randomly generated lexicons, suggesting that the pressure towards highly clustered lexicons is driven by more than just phonotactic probability.",
"cite_spans": [
{
"start": 134,
"end": 157,
"text": "Dautriche et al. (2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactic Probability and Surprisal",
"sec_num": "2.4"
},
{
"text": "Evidence of pressure towards communication efficiency in the lexicon has focused on both phonotactic probability and word length. The information content, as measured by the probability of a word in context, is shown to correlate with orthographic length (taken as a proxy for phonological word length) (Piantadosi et al., 2009 (Piantadosi et al., , 2011 . Piantadosi et al. 2012show that words with lower bits per phoneme have higher rates of homophony and polysemy, in support of their hypothesis that words that are easier to process will have higher levels of ambiguity. Relatedly, Mahowald et al. (2018) demonstrate, in nearly all of the 96 languages investigated, a high correlation between orthographic probability (as proxy for phonotactic probability) and frequency, that is, frequently used forms tend to be phonotactically highly probable, at least within the word lengths examined (3-7 symbols). A similar perspective on the role of predictability in phonology holds that words that are high probability in context (i.e., low surprisal) tend to be reduced, and those that are low probabilty in context are prone to change (Hume and Mailhot, 2013) or to some kind of enhancement (Hall et al., 2018) . As Priva and Jaeger (2018) point out, frequency, predictabilty and information content (what they call informativity and operationalize as expected predictability) are related and easily confounded, hence the perspectives presented by these papers are closely related. Again, for these studies and those cited earlier, such measures are used to characterize individual words within a language rather than the lexicon as a whole.",
"cite_spans": [
{
"start": 303,
"end": 327,
"text": "(Piantadosi et al., 2009",
"ref_id": "BIBREF55"
},
{
"start": 328,
"end": 354,
"text": "(Piantadosi et al., , 2011",
"ref_id": "BIBREF53"
},
{
"start": 586,
"end": 608,
"text": "Mahowald et al. (2018)",
"ref_id": "BIBREF42"
},
{
"start": 1134,
"end": 1158,
"text": "(Hume and Mailhot, 2013)",
"ref_id": "BIBREF30"
},
{
"start": 1190,
"end": 1209,
"text": "(Hall et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonotactic Probability and Surprisal",
"sec_num": "2.4"
},
{
"text": "In this work, we are interested in a hypothetical phonotactic distribution p lex : \u03a3 * \u2192 R + over the lexicon. In the context of phonology, we interpret \u03a3 * as all ''universally possible phonological surface forms,'' following Hayes and Wilson (2008) . 7 The distribution p lex , then, assigns a probability to every possible surface form x \u2208 \u03a3 * . In the special case that p lex is a log-linear model, then we arrive at what is known as a maximum entropy grammar (Goldwater and Johnson, 2003; J\u00e4ger, 2007) . A good distribution p lex should assign high probability to phonotactically valid words, including non-existent ones, but little probability to phonotactic impossibilities. For instance, the possible English word blick should receive much higher probability than * bnick, which is not a possible English word. The lexicon of a language, then, is considered to be generated as samples without replacement from p lex .",
"cite_spans": [
{
"start": 227,
"end": 250,
"text": "Hayes and Wilson (2008)",
"ref_id": "BIBREF26"
},
{
"start": 253,
"end": 254,
"text": "7",
"ref_id": null
},
{
"start": 464,
"end": 493,
"text": "(Goldwater and Johnson, 2003;",
"ref_id": "BIBREF17"
},
{
"start": 494,
"end": 506,
"text": "J\u00e4ger, 2007)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Probabilistic Lexicon",
"sec_num": "3"
},
{
"text": "If we accept the existence of the distribution p lex , then a natural manner by which we should measure the phonological complexity of language is through Shannon's entropy (Cover and Thomas, 2012)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probabilistic Lexicon",
"sec_num": "3"
},
{
"text": "H(p lex ) = \u2212 x\u2208\u03a3 * p lex (x) log p lex (x) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probabilistic Lexicon",
"sec_num": "3"
},
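Equation (1) can be computed directly for any explicitly enumerated distribution; the following sketch uses a toy four-word support, since the true sum over \u03a3* is infinite:

```python
import math

def entropy_bits(p_lex):
    """Shannon entropy (Equation 1) of a finite distribution, in bits."""
    return -sum(p * math.log2(p) for p in p_lex.values() if p > 0)

# Toy lexicon distribution over four surface forms (made-up values).
p_lex = {"ba": 0.4, "bi": 0.3, "ab": 0.2, "ib": 0.1}
print(entropy_bits(p_lex))  # ~1.85 bits
```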
{
"text": "The units of H(p lex ) are bits as we take log to be base 2. Specifically, we will be interested in bits per phoneme, that is, how much information each phoneme in a word conveys, on average.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Probabilistic Lexicon",
"sec_num": "3"
},
{
"text": "Here we seek to make a linguistic argument for the adoption of bits per phoneme as a metric for complexity in the phonological literature. Bits are fundamentally units of predictability: If the entropy of your distribution is higher (i.e., more bits), then it is less predictable, and if the entropy is lower, (i.e., fewer bits), then it is more predictable with an entropy of 0 indicating determinism.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "Holistic Treatment. When we just count the number of distinctions in individual parts of the phonology, for example, number of vowels or number of consonants, we do not get a holistic picture of how these pieces interact. A simple probabilistic treatment will inherently capture nuanced interactions. Indeed, it is not clear how to balance the number of consonants, the number of vowels and the number of tones to get a single number of phonological complexity. Probabilistically modeling phonological strings, however, does capture this. We judge the complexity of a phonological system as its entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "Longer-Distance Dependencies. To the best of our knowledge, the largest phonological unit that has been considered in the context of crosslinguistic phonological complexity is the syllable, as discussed in \u00a72.2. However, the syllable clearly has limitations. It cannot capture, tautologically, cross-syllabic phonological processes, which abound in the languages of the world. For instance, vowel and consonant harmony are quite common crosslinguistically. Naturally, a desideratum for any measure of phonological complexity is to consider all levels of phonological processes. Examples of vowel harmony in Turkish are presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 632,
"end": 639,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "Frequency Information. None of the previously proposed phonological complexity measures deals with the fact that certain patterns are more frequent than others; probability models inherently handle this as well. Indeed, consider the role of the unvoiced velar fricative /x/ in English; while not part of the canonical consonant inventory, /x/ nevertheless appears in a variety of loanwords. For instance, many native English speakers do pronounce the last name of composer Johann Sebastian Bach as /bax/. Moreover, English phonology acts upon /x/ as one would expect: Consider Morris Halle's (1978) example Sandra out-Bached Bach, where the second word is pronounced /out-baxt/ with a final /t/ rather than a /d/. We conclude that /x/ is in the consonant inventory of at least some native English speakers. However, counting it on equal status with the far more common /k/ when determining complexity seems incorrect. Our probabilistic metric covers this corner case elegantly.",
"cite_spans": [
{
"start": 584,
"end": 598,
"text": "Halle's (1978)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "Relatively Modest Annotation Requirements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "Many of these metrics require a linguist's analysis of the language. This is a tall order for many languages. Our probabilistic approach only requires relatively simple annotations, namely, a Swadesh (1955) -style list in the international phonetic alphabet (IPA) to estimate a distribution. When discussing why he limits himself to counting complexities, Maddieson (2009) writes:",
"cite_spans": [
{
"start": 192,
"end": 206,
"text": "Swadesh (1955)",
"ref_id": "BIBREF67"
},
{
"start": 356,
"end": 372,
"text": "Maddieson (2009)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "[t]he factors considered in these studies only involved the inventories of consonant and vowel contrasts, the tonal system, if any, and the elaboration of the syllable canon. It is relatively easy to find answers for a good many languages to such questions as 'how many consonants does this language distinguish?' or 'how many types of syllable structures does this language allow?'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "The moment one searches for data on more elaborate notions of complexity, for example, the existence of vowel harmony, one is faced with the paucity of data-a linguist must have analyzed the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Rationale",
"sec_num": "3.1"
},
{
"text": "Many phonologies in the world use hard constraints (e.g., a syllable final obstruent must be devoiced or the vowels in a word must be harmonic). Using our definition of phonological complexity as entropy, we can prove a general result that any hard-constraining process will reduce entropy, thus making the phonology less complex. The fact that this holds for any hard contraint, be it vowel harmony or final-obstruent devoicing, is a fact that conditioning reduces entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints Reduce Entropy",
"sec_num": "3.2"
},
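A toy numeric illustration of the conditioning point (a sketch, not the paper's proof): under a hard harmony constraint, knowing the first vowel's class makes the second vowel strictly more predictable.

```python
import math

def H(probs):
    """Entropy in bits of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four vowels: front {i, e}, back {u, o}. Unconditionally, the second
# vowel is uniform over all four (2 bits). Under a hard harmony
# constraint, it is uniform over the two vowels of the first vowel's
# class (1 bit), whichever class that is.
h_unconditional = H([0.25] * 4)   # 2.0 bits
h_given_harmony = H([0.5] * 2)    # 1.0 bit
assert h_given_harmony < h_unconditional
```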
{
"text": "If we want to compute Equation (1), we are immediately faced with two problems. First, we do not know p lex : we simply assume the existence of such a distribution from which the words of the lexicon were drawn. Second, even if we did know p lex , computation of the H(p lex ) would be woefully intractable, as it involves an infinite sum. Following Brown et al. (1992), we tackle both of these issues together. Note that this line of reasoning follows and Mielke et al. (2019) , who use a similar technique for measuring language complexity at the sentence level.",
"cite_spans": [
{
"start": 457,
"end": 477,
"text": "Mielke et al. (2019)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "We start with a basic inequality from information theory. For any distribution q lex with the same support as p lex , the cross-entropy provides an upper bound on the entropy, that is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(p lex ) \u2264 H(p lex , q lex )",
"eq_num": "(2)"
}
],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "where cross-entropy is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "H(p lex , q lex ) = \u2212 x\u2208\u03a3 * p lex (x) log q lex (x) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "Note that Equation 2is tight if and only if p lex = q lex . We still are not done, as Equation 3still requires knowledge of p lex and involves an infinite sum. However, we are now in a position to exploit samples from p lex . Specifically, give\u00f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x (i) \u223c p lex , we approximate H(p lex , q lex ) \u2248 \u2212 1 N N i=1 log q lex (x (i) )",
"eq_num": "(4)"
}
],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "with equality if we let N \u2192 \u221e. In information theory, this equality in the limit is called the asymptotic equipartition property and follows easily from the weak law of large numbers. Now, we have an empirical procedure for estimating an upper bound on H(p lex ). For the rest of the paper, we will use the right-hand side of Equation 4as a surrogate for the phonotactic complexity of a language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
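The right-hand side of Equation (4) is just a held-out average; a minimal sketch, where `q_lex` is a hypothetical callable returning a trained model's probability for a whole word:

```python
import math

def cross_entropy_upper_bound(heldout_words, q_lex):
    """Monte Carlo estimate of the bound in Equation (4): average
    negative log2-probability of held-out samples x^(i) ~ p_lex
    under the model q_lex."""
    return -sum(math.log2(q_lex(w)) for w in heldout_words) / len(heldout_words)

def bits_per_phoneme(heldout_words, q_lex):
    """The same quantity normalized per phoneme, the measure the
    paper reports per language."""
    total_phonemes = sum(len(w) for w in heldout_words)
    return -sum(math.log2(q_lex(w)) for w in heldout_words) / total_phonemes
```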
{
"text": "How to Choose q lex ? Choosing a good q lex is a two-step process. First, we choose a variational family Q. Then, we choose a specific q lex \u2208 Q by minimizing the right-hand side of Equation (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q lex = argsup q\u2208Q 1 N N i=1 log q(x (i) )",
"eq_num": "(5)"
}
],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "This procedure corresponds to maximum likelihood estimation. In this work, we consider two variational families: (i) a phoneme n-gram model, and (ii) a phoneme-level RNN language model. We describe each in \u00a74.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Variational Upper Bound",
"sec_num": "3.3"
},
{
"text": "To make the implicit explicit, in this work we will exclusively be modeling types, rather than tokens. We briefly justify this discussion from both theoretical and practical concerns. From a theoretical side, a token-based model is unlikely to correctly model an out of vocabulary distribution as very frequent tokens often display unusual phonotactics for historical reasons. A classic example comes from English: Consider the appearance of /D/. Judging by token-frequency, /D/ is quite common as it starts some of the most common words in the language: the, they, that, and so forth. However, novel words categorically avoid initial /D/. From a statistical point of view, one manner to justify type-level modeling is through the Pitman-Yor process (Ishwaran and James, 2003) . Goldwater et al. (2006) showed that type-level modeling is a special case of the stochastic process, writing that they ''justif[y] the appearance of type frequencies in formal analyses of natural language.'' Practically, using token-level frequencies, even in a dampened form, is not possible due to the large selection of languages we model. Most of the languages we consider do not have corpora large enough to get reasonable token estimates. Moreover, as many of the languages we consider have a small number of native speakers, and, in extreme cases, are endangered, the situation is unlikely to remedy itself, forcing the phonotactician to rely on types.",
"cite_spans": [
{
"start": 750,
"end": 776,
"text": "(Ishwaran and James, 2003)",
"ref_id": "BIBREF31"
},
{
"start": 779,
"end": 802,
"text": "Goldwater et al. (2006)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Note on Types and Tokens",
"sec_num": "3.4"
},
{
"text": "Notation. Let \u03a3 be a discrete alphabet of symbols from the IPA, including special beginningof-string and end-of-string symbols. A character level language model (LM) models a probability distribution over",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03a3 * p(x) = |x| i=1 p (x i | x <i )",
"eq_num": "(6)"
}
],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "Trigram LM. n-grams assume the sequence follows a (n \u2212 1)-order Markov model, conditioning the probability of a phoneme on the (n \u2212 1) previous ones",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f n (x i | x <i ) = count(x i , x i\u22121 , . . . , x i+1\u2212n ) count(x i\u22121 , . . . , x i+1\u2212n )",
"eq_num": "(7)"
}
],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "where we assume the string x is properly padded with beginning and end-of-string symbols.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "The trigram model used in this work is estimated as the deleted interpolation (Jelinek, 1980) of the trigram, bigram, and unigram relative frequency estimates",
"cite_spans": [
{
"start": 78,
"end": 93,
"text": "(Jelinek, 1980)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p 3 (x i | x <i ) = 3 n=1 \u03b1 n f n (x i | x <i )",
"eq_num": "(8)"
}
],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "where the mixture parameters \u03b1 n were estimated via Bayesian optimization with a Gaussian prior maximizing the expected improvement on a validation set, as discussed by Snoek et al. (2012) .",
"cite_spans": [
{
"start": 169,
"end": 188,
"text": "Snoek et al. (2012)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
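A compact sketch of Equations (7)-(8): relative-frequency n-gram estimates combined by deleted interpolation. The smoothing weights are assumed given, since the Bayesian-optimization tuning step is not reproduced here.

```python
from collections import Counter

BOS, EOS = "<s>", "</s>"

def ngram_counts(words, n_max=3):
    """Counts of all 1..n_max-grams over BOS/EOS-padded phoneme strings."""
    counts = Counter()
    for w in words:
        padded = [BOS] * (n_max - 1) + list(w) + [EOS]
        for n in range(1, n_max + 1):
            for i in range(len(padded) - n + 1):
                counts[tuple(padded[i:i + n])] += 1
    return counts

def p_interp(counts, history, x, alphas):
    """Deleted-interpolation probability (Equation 8): a weighted mixture
    of unigram, bigram, and trigram relative frequencies (Equation 7).
    `history` includes BOS padding; `alphas` are assumed already tuned
    on a validation set."""
    n_unigrams = sum(c for k, c in counts.items() if len(k) == 1)
    p = alphas[0] * counts[(x,)] / n_unigrams              # f_1(x)
    for n, alpha in zip((2, 3), alphas[1:]):
        ctx = tuple(history[-(n - 1):])                    # last n-1 symbols
        if counts[ctx] > 0:
            p += alpha * counts[ctx + (x,)] / counts[ctx]  # f_n(x | ctx)
    return p
```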
{
"text": "Recurrent Neural LM. Recurrent neural networks excel in language modeling, being able to capture complex distributions p(x i | x <i ) (Mikolov et al., 2010; Sundermeyer et al., 2012) . Empirically, recent work has observed dependencies on up to around 200 tokens (Khandelwal et al., 2018) . We use a characterlevel Long Short-Term Memory (LSTM, Hochreiter and Schmidhuber, 1997) language model, which is the state of the art for character-level language modeling (Merity et al., 2018) . Our architecture receives a sequence of tokens x \u2208 \u03a3 * and embeds each token x i \u2208 \u03a3 using a dictionary-lookup embedding table. This results in vectors z i \u2208 R d which are fed into an LSTM. This LSTM produces a high-dimensional representation of the sequence, often termed hidden states",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Mikolov et al., 2010;",
"ref_id": "BIBREF49"
},
{
"start": 157,
"end": 182,
"text": "Sundermeyer et al., 2012)",
"ref_id": "BIBREF66"
},
{
"start": 263,
"end": 288,
"text": "(Khandelwal et al., 2018)",
"ref_id": "BIBREF35"
},
{
"start": 345,
"end": 378,
"text": "Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF27"
},
{
"start": 463,
"end": 484,
"text": "(Merity et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h i = LST M (z i\u22121 , h i\u22121 ) \u2208 R d",
"eq_num": "(9)"
}
],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "These representations are then fed into a softmax to produce a distribution over the next character",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p (x i | x <i ) = softmax (W h i + b)",
"eq_num": "(10)"
}
],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "where W \u2208 R |\u03a3|\u00d7d is a final projection matrix and b \u2208 R |\u03a3| is a bias term. In our implementation, h 0 is a vector of all zeros and z 0 is the lookup embedding for the beginning-of-string token. corresponding embedding representation z (k) . A phoneme embedding will, then, be composed by the element-wise average of each of its features lookup embedding",
"cite_spans": [
{
"start": 237,
"end": 240,
"text": "(k)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
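Equations (9)-(10) correspond to a standard embedding-LSTM-softmax stack. A minimal PyTorch sketch of that shape (dimensions illustrative; the paper's exact hyperparameters are not given in this excerpt):

```python
import torch.nn as nn

class PhonemeLM(nn.Module):
    """Embedding -> LSTM -> linear projection, the shape of Equations
    (9) and (10). d=64 is an illustrative dimension."""
    def __init__(self, vocab_size, d=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)  # lookup table giving z_i
        self.lstm = nn.LSTM(d, d, batch_first=True)
        self.proj = nn.Linear(d, vocab_size)      # W h_i + b

    def forward(self, x):        # x: (batch, time) token ids
        z = self.embed(x)        # (batch, time, d) embeddings
        h, _ = self.lstm(z)      # hidden states h_i
        return self.proj(h)      # logits; softmax gives p(x_i | x_<i)
```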
{
"text": "z i = k a (k) i z (k) k a (k) i (11) where a (j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
{
"text": "i is 1 if phoneme i presents attribute j and z (j) is the lookup embedding of attribute j. This architecture forces similar phonemes, measured in terms of overlap in distinctive features, to have similar representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phoneme-Level Language Models",
"sec_num": "4.1"
},
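Equation (11) is an element-wise average of active-feature embeddings; a small sketch, assuming a 0/1 feature indicator vector per phoneme:

```python
import torch

def phoneme_embedding(a, z_features):
    """Equation (11): a phoneme's embedding as the element-wise average
    of its active features' embeddings. `a` is a 0/1 indicator vector
    over features (a_i^{(k)}); `z_features` is a (num_features, d)
    matrix whose rows are the z^{(k)} lookup embeddings."""
    return (a.unsqueeze(1) * z_features).sum(dim=0) / a.sum()
```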
{
"text": "We make use of data from the NorthEuraLex corpus (Dellert and J\u00e4ger, 2017) . The corpus is a concept-aligned multi-lingual lexicon with data from 107 languages. The lexicons contains 1016 ''basic'' concepts. Importantly, NorthEuraLex is appealing for our study as all the words are written in a unified IPA scheme. A sample of the lexicon is provided in Table 2 . For the results reported in this paper, we omitted Mandarin, because no tone information was included in its annotations, causing its phonotactics to be greatly underspecified. No other tonal languages were included in the corpus, so all reported results are over 106 languages.",
"cite_spans": [
{
"start": 49,
"end": 74,
"text": "(Dellert and J\u00e4ger, 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 354,
"end": 361,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "NorthEuraLex Data",
"sec_num": "4.2"
},
{
"text": "Why Is Base-Concept Aligned Important? Making use of data that are concept-aligned across the languages provides a certain amount of control (to the extent possible) of the influence of linguistic content on the forms that we are modeling. In other words, these forms should be largely comparable across the languages in terms of how common they are in the active vocabulary of adult speakers. Further, base concepts as defined for the collection are more likely to be lemmas without inflection, thus reducing the influence of morphological processes on the results. 8 To test this latter assertion, we made use of the UniMorph 9 morphological database (Kirov et al., 2018) to look up words and assess the percentage that correspond to lemmas or base forms. Of the 106 languages in our collection, 48 are also in the UniMorph database, and 46 annotate their lemmas in a way that allowed for simple string matching with our word forms. For these 46 languages, on average we found 313 words in UniMorph of the 1016 concepts (median 328). A mean of 87.2% (median 93.3%; minimum 58.6%) of these matched lemmas for that language in the UniMorph database. This rough string matching approach provides some indication that the items in the corpus are largely composed of such base forms.",
"cite_spans": [
{
"start": 567,
"end": 568,
"text": "8",
"ref_id": null
},
{
"start": 653,
"end": 673,
"text": "(Kirov et al., 2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NorthEuraLex Data",
"sec_num": "4.2"
},
{
"text": "Dataset Limitations. Unfortunately, there is less typological diversity in our dataset than we would ordinarily desire. NorthEuraLex draws its languages from 21 distinct language families that are spoken in Europe and Asia. This excludes languages indigenous to the Americas, 10 Australia, Africa, and South-East Asia. Although lamentable, we know of no other concept-aligned lexicon with broader typological diversity that is written in a unified phonetic alphabet, so we must save studies of more typologically diverse sets of languages for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NorthEuraLex Data",
"sec_num": "4.2"
},
{
"text": "In addition, we note that the process of base concept selection and identification of corresponding forms from each language (detailed in Dellert, 2015 Dellert, , 2017 was non-trivial, and some of the corpus design decisions may have resulted in somewhat biased samples in some languages. For example, there was an attempt to minimize the frequency of loanwords in the dataset, which may make the lexicons in loanword heavy languages, such as English with its extensive Latinate vocabulary, somewhat less representative of everyday use than in other languages. Similarly, the creation of a common IPA representation over this number of languages required choices that 8 Most of the concepts in the dataset do not contain function words and verbs are in the bare infinitive form -(e.g., have, instead of to have) although there are a few exceptions. For example, the German word hundert is represented as a hundred in English.",
"cite_spans": [
{
"start": 138,
"end": 151,
"text": "Dellert, 2015",
"ref_id": "BIBREF12"
},
{
"start": 152,
"end": 167,
"text": "Dellert, , 2017",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NorthEuraLex Data",
"sec_num": "4.2"
},
{
"text": "9 https://unimorph.github.io. 10 Inuit languages, which are genetically related to the languages of Siberia, are included in the lexicon. could potentially result in corpus artifacts. As with the issue of linguistic diversity, we acknowledge that the resource has some limitations but claim that it is the best currently available dataset for this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NorthEuraLex Data",
"sec_num": "4.2"
},
{
"text": "Splitting the Data. We split the data at the concept level into 10 folds, used for cross validation. We create train-dev-test splits where the training portion has 8 folds (\u2248 812 concepts) and the dev and test portions have 1 fold each (\u2248 102 concepts). We then create language-specific sets with the language-specific words for the concept to be rendered. Cross-validation allows us to have all 1016 concepts in our test sets (although evaluated using different model instances), and we do our following studies using all of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NorthEuraLex Data",
"sec_num": "4.2"
},
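The split itself is simple; the following is a minimal sketch, under our own naming, of a concept-level 10-fold split in which each run trains on 8 folds and holds out one fold each for dev and test:

```python
import random


def concept_folds(concept_ids, n_folds=10, seed=0):
    """Partition concept IDs into n_folds disjoint folds (a concept-level split)."""
    ids = list(concept_ids)
    random.Random(seed).shuffle(ids)
    return [ids[i::n_folds] for i in range(n_folds)]


def train_dev_test(folds, test_idx):
    """Use 8 folds for training and 1 each for dev and test, rotating test_idx."""
    dev_idx = (test_idx + 1) % len(folds)
    train = [c for i, fold in enumerate(folds)
             if i not in (test_idx, dev_idx) for c in fold]
    return train, folds[dev_idx], folds[test_idx]


folds = concept_folds(range(1016))
train, dev, test = train_dev_test(folds, test_idx=0)
print(len(train), len(dev), len(test))  # roughly 812 / 102 / 102 concepts
```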
{
"text": "In addition to naturally occurring languages, we are also interested in artificial ones. Why? We wish to validate our models in a controlled setting, quantifying the contribution of specific linguistic phenomena to our complexity measure. Thus, developing artificial languages, which only differ with respect to one phonological property, is useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Languages",
"sec_num": "4.3"
},
{
"text": "The Role of Final-Obstruent Devoicing. Finalobstruent devoicing reduces phonological complexity under our information-theoretic metric. The reason is simple: There are fewer valid syllables as all those with voiced final obstruents are ruled out. Indeed, this point is also true of the syllable counting metric discussed in \u00a72.2. One computational notion of complexity might say that the complexity of the phonology is equal to the number of states required to encode the transduction from an underlying form to a surface form in a minimal finite-state transduction. Note that all Sound Pattern of English (SPE)-style rules may be so encoded (Kaplan and Kay, 1994) . Thus, the complexity of the phonotactics could be said to be related to the number of SPE-style rules that operate. In contrast, under our metric, any process that constrains the number of possibilities will, inherently, reduce complexity. The studies in \u00a75.3 allow us to examine the magnitude of such a reduction, and validate our models with respect to this expected behavior. We create two artificial datasets without finalobstruent devoicing based on the German and Dutch portions of NorthEuraLex. We reverse the process, using the orthography as a guide. For example, the German /tsu:k/ is converted to /tsu:g/ based on the orthography Zug.",
"cite_spans": [
{
"start": 642,
"end": 664,
"text": "(Kaplan and Kay, 1994)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Languages",
"sec_num": "4.3"
},
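To make the reversal concrete, here is an illustrative sketch under simplifying assumptions (one character per segment, and toy correspondence tables standing in for a real letter-to-phoneme analysis); it is not the authors' exact procedure:

```python
# Undo final-obstruent devoicing using the spelling as a guide,
# as in /tsu:k/ -> /tsu:g/ for German Zug.
DEVOICED_TO_VOICED = {"p": "b", "t": "d", "k": "g", "f": "v", "s": "z"}
VOICED_LETTER = {"b": "b", "d": "d", "g": "g", "v": "v", "w": "v", "s": "z"}


def undo_final_devoicing(ipa, orthography):
    """Re-voice a word-final obstruent when the spelling ends in its voiced mate."""
    if not ipa or not orthography:
        return ipa
    phone, letter = ipa[-1], orthography[-1].lower()
    if phone in DEVOICED_TO_VOICED and DEVOICED_TO_VOICED[phone] == VOICED_LETTER.get(letter):
        return ipa[:-1] + DEVOICED_TO_VOICED[phone]
    return ipa


assert undo_final_devoicing("tsu:k", "Zug") == "tsu:g"
assert undo_final_devoicing("hunt", "Hund") == "hund"
```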
{
"text": "The Role of Vowel Harmony. Like final obstruent devoicing, vowel harmony plays a role in reducing the number of licit syllables. In contrast to final obstruent devoicing, however, vowel harmony acts cross-syllabically. Consider the Turkish lexicon, where most, but not all, basic lexical items obey vowel harmony. Processes like this reduce the entropy of p lex and, thus, can be considered as creating a less complex phonotactics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Languages",
"sec_num": "4.3"
},
{
"text": "For vowel harmony, we create 10 artificial datasets by randomly replacing each vowel in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Artificial Languages",
"sec_num": "4.3"
},
{
"text": "Pearson r Spearman \u03c1 a word with a new sampled (with replacement) vowel from that language's vowel inventory. This breaks all vowel harmony, but keeps the syllabic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measure",
"sec_num": null
},
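A minimal sketch of this manipulation, with a toy vowel set standing in for the language's actual inventory and a one-character-per-segment transcription assumed:

```python
import random

VOWELS = set("aeiouyøɑɔəɛɪʊ")  # illustrative stand-in for a real vowel inventory


def resample_vowels(word, inventory, rng):
    """Replace each vowel with one sampled (with replacement) from the inventory."""
    return "".join(rng.choice(inventory) if seg in VOWELS else seg for seg in word)


rng = random.Random(0)
inventory = sorted({seg for seg in "kedileri" if seg in VOWELS})
# Harmony is broken, but the syllabic (CV) skeleton is preserved:
print(resample_vowels("kedileri", inventory, rng))
```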
{
"text": "As stated earlier, Pellegrino et al. (2011) investigated a complexity trade-off with the information density of speech. From a 7-language study they found a strong correlation (R = \u22120.94) between the information density and the syllabic complexity of a language. One hypothesis adduced to explain these findings is that, for functional reasons, the rate of linguistic information is very similar cross-linguistically. Inspired by their study, we conduct a similar study with our phonotactic setup. We hypothesize that the bits per phoneme for a given concept correlates with the number of phonemes in the word. Moreover, the bits per word should be similar across languages. We consider the relation between the average bits per phoneme of a held-out portion of a language's lexicon, as measured by our best language model, and the average length of the words in that language. We present the results in Figures 2 and 3 and in Table 3 . We find a strong correlation under the LSTM LM (Spearman's \u03c1 = \u22120.744 with p < 10 \u221219 ). At the same time, we see only a weak correlation under conventional measures of phonotactic complexity, such as vowel inventory size (Spearman's \u03c1 = \u22120.162 with p = 0.098). In Figure 4 , we plot the kernel density estimate and histogram densities (both 10 and 100 bins) of word-level complexity (bits per word).",
"cite_spans": [
{
"start": 19,
"end": 43,
"text": "Pellegrino et al. (2011)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [
{
"start": 904,
"end": 919,
"text": "Figures 2 and 3",
"ref_id": "FIGREF1"
},
{
"start": 927,
"end": 934,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1202,
"end": 1210,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Study 1: Bits Per Phoneme Negatively Correlates with Word Length",
"sec_num": "5.1"
},
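As a sketch of how such a correlation can be computed, assuming we already have per-word surprisals (total bits) and lengths from a trained model, with toy numbers and an aggregation that may differ in detail from the paper's:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Toy per-language data: (total bits per word, phoneme count per word).
langs = {
    "A": ([12.0, 9.5, 14.0], [5, 4, 6]),
    "B": ([10.0, 11.0, 10.5], [7, 8, 7]),
    "C": ([8.0, 7.5, 9.0], [9, 8, 10]),
}

bits_per_phoneme, avg_length = [], []
for total_bits, lengths in langs.values():
    bits_per_phoneme.append(np.sum(total_bits) / np.sum(lengths))
    avg_length.append(np.mean(lengths))

print(spearmanr(bits_per_phoneme, avg_length))
print(pearsonr(bits_per_phoneme, avg_length))
```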
{
"text": "One possible confound for these results is that phonemes later in a word may in general have higher probability given the previous phonemes than those earlier in the string. This sort of positional effect was demonstrated in Dutch (van Son and Pols, 2003) , where position in the word accounted for much of the variance in segmental information. 11 To ensure that we are not sim- ply replicating such a positional effect across many languages, we performed several additional analyses.",
"cite_spans": [
{
"start": 231,
"end": 255,
"text": "(van Son and Pols, 2003)",
"ref_id": "BIBREF59"
},
{
"start": 346,
"end": 348,
"text": "11",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Study 2: Possible Confounds for Negative Correlations",
"sec_num": "5.2"
},
{
"text": "Truncated Words. First, we calculated the bitsper-phoneme for just the first three positions in the word, and then looked at the correlation between this word-onset bits per phoneme and the average (full) word length in phoneme segments. In other words, for the purpose of calculating bits-perphoneme, we truncated all words to a maximum of three phonemes, and in such a way explicitly eliminated the contribution of positions later in any word. Using the LSTM model, this yielded a Spearman correlation of \u03c1 = \u22120.469 (p < 10 \u22127 ) , in contrast to \u03c1 = \u22120.744 without such truncation (reported in Table 3 ). This suggests that there is a contribution of later positions to the effect presented in Table 3 that we lose by eliding them, but that even in the earlier positions of the word we are seeing a trade-off with full average word length.",
"cite_spans": [],
"ref_spans": [
{
"start": 596,
"end": 603,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 696,
"end": 703,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Study 2: Possible Confounds for Negative Correlations",
"sec_num": "5.2"
},
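A sketch of the truncated computation, assuming each word comes with its per-position surprisals (in bits) from the language model; names and data are illustrative:

```python
import numpy as np


def truncated_bits_per_phoneme(per_position_bits, max_len=3):
    """Bits per phoneme over at most the first max_len positions of each word."""
    total_bits = sum(float(np.sum(np.asarray(b)[:max_len])) for b in per_position_bits)
    total_phones = sum(min(len(b), max_len) for b in per_position_bits)
    return total_bits / total_phones


# Toy usage: two words, with one surprisal value per phoneme position.
print(truncated_bits_per_phoneme([[3.0, 2.0, 1.5, 1.0], [2.5, 2.0]]))  # 2.2
```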
{
"text": "Correlation with phoneme position. We next looked to measure a position effect directly, by calculating the correlation between word position and bits for that position across all languages. Here we find a Spearman correlation of \u03c1 = \u22120.429 (p < 10 \u2212200 ), which again supports the contention that later positions in general require fewer bits to encode. Nonetheless, this correlation is rather simply analyzed raw relative frequency over their Dutch corpus. As a result, all positions beyond any word onset that is unique in their corpus would have probability 1, leading to a more extreme position effect than we would observe using regularization and validating on unseen forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Study 2: Possible Confounds for Negative Correlations",
"sec_num": "5.2"
},
{
"text": "Permuted ''Language'' Correlations. Finally, to determine if our language effects perhaps arise due to the averaging of word lengths and bits per phoneme for each language, we ran a permutation test on languages. We shuffle words (with their precalculated bits per phoneme values) into 106 sets with the same size as the original languages-thus creating fake ''languages''. We take the average word length and bits per phoneme in each of these fake languages and compare the correlationreturning to the ''language'' level this time-with the original correlation. After running this test for 10 4 permutations, we found no shuffled set with an equal or higher Spearman (or Pearson) correlation than the real set. Thus, with a strong confidence (p < 10 \u22124 ) we can state there is a language level effect. Average and minimum negative correlations for these ''fake'' languages (as well as the real set for ease of comparison) are presented in the lower half of Table 4. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Study 2: Possible Confounds for Negative Correlations",
"sec_num": "5.2"
},
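The following sketch shows the shape of this permutation test under our own naming; the exact resampling details in the paper may differ:

```python
import random
import numpy as np
from scipy.stats import spearmanr


def language_permutation_test(words, sizes, real_rho, n_perm=10_000, seed=0):
    """words: (bits_per_phoneme, word_length) pairs; sizes: words per language."""
    rng = random.Random(seed)
    words = list(words)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(words)  # reassign words to fake "languages"
        bpp, lens, start = [], [], 0
        for size in sizes:
            chunk = words[start:start + size]
            start += size
            bpp.append(np.mean([b for b, _ in chunk]))
            lens.append(np.mean([l for _, l in chunk]))
        rho, _ = spearmanr(bpp, lens)
        if rho <= real_rho:  # at least as extreme a negative correlation
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one-smoothed p-value
```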
{
"text": "Final-obstruent devoicing and vowel harmony reduce the number of licit syllables in a language, hence reducing the entropy. To determine the magnitude that such effects can have on the measure for our different model types, we conduct two studies. In the first, we remove final-obstruent devoicing from the German and Dutch languages in NorthEuraLex, as discussed in \u00a74.3. In the second study, we remove vowel harmony from 10 languages that have it, 12 as also explained in \u00a74.3. After deriving two artificial languages without obstruent devoicing from both German and Dutch, we used 10-fold cross validation to train models for each language. The statistical relevance of differences between normal and artificial languages was analyzed using paired permutation tests between the pairs. Results are presented in Table 5 . We see that the n-gram can capture this change in complexity for Dutch, but not for German. At the same time, the LSTM shows a statistically significant increase of \u2248 0.034 bits per phoneme when we remove obstruent devoicing from both languages. Figure 5 presents a similar impact on complexity from vowel harmony removal, as evidenced by the fact that all points fall above the equality line. Average complexity increased by \u2248 0.62 bits per phoneme (an approximate 16% entropy increase), as measured by our LSTM models.",
"cite_spans": [],
"ref_spans": [
{
"start": 813,
"end": 820,
"text": "Table 5",
"ref_id": null
},
{
"start": 1069,
"end": 1077,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Study 3: Constraining Languages Reduces Phonotactic Complexity",
"sec_num": "5.3"
},
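As a sketch of the significance test used for these comparisons, a standard paired (sign-flipping) permutation test over per-item differences might look as follows; this is our own illustration, not necessarily the authors' implementation:

```python
import random
import numpy as np


def paired_permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided p-value for the mean difference between paired samples a and b."""
    rng = random.Random(seed)
    diffs = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    observed = abs(diffs.mean())
    hits = 0
    for _ in range(n_perm):
        signs = np.array([rng.choice((-1.0, 1.0)) for _ in diffs])
        if abs((signs * diffs).mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)


# Toy usage: per-word bits under the natural vs. devoicing-removed lexicon.
natural = [2.10, 1.90, 2.30, 2.00]
artificial = [2.15, 1.95, 2.30, 2.05]
print(paired_permutation_test(natural, artificial, n_perm=2000))
```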
{
"text": "Pearson r Spearman \u03c1 In both of these artificial language scenarios, the LSTM models appeared more sensitive to the constraint removal, as expected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measure",
"sec_num": null
},
{
"text": "Moran and Blasi (2014) investigated the correlation between the number of phonological units in a language and its average word length across a large and varied set of languages. They found that, although these measures of phonotactic complexity (number of vowels, consonants or phonemes in a language) are correlated with word length when measured across a varied set of languages, such a correlation usually does not hold within language families. We hypothesize that this is due to their measures being rather coarse approximations to phonotactic complexity, so that only large changes in the language would show significant correlation given the noise. We also hypothesize that our complexity measure is less noisy, hence should be able to yield significant correlations both within and across families. Results in Table 3 show a strong correlation for the LSTM measure, while they show a weak one for conventional measures of complexity. As stated before, Moran and Blasi (2014) found that vowel inventory size shows a strong correlation to word length on a diverse set of languages, but, as mentioned in \u00a74.2, our dataset is more limited than desired. To test if we can mitigate this effect we average the complexity measures and word length per family (instead of per language) and calculate the same correlations again. These results are Table 7 : Spearman correlation between complexity measures and average word length per language family. Phonotactic complexity in bits per phoneme presents very strong intra-family correlation with word length in three of the five families. Size of vowel inventory presents intra-family correlation in Turkic and Uralic. presented in Table 6 and show that when we average these complexity measures per family we indeed find a stronger correlation between vowel inventory size and average word length, although with a higher null hypothesis probability (Spearman's \u03c1 = \u22120.367 with p = 0.111).",
"cite_spans": [
{
"start": 961,
"end": 983,
"text": "Moran and Blasi (2014)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 819,
"end": 826,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1346,
"end": 1353,
"text": "Table 7",
"ref_id": null
},
{
"start": 1680,
"end": 1687,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Study 4: Negative Trade-off Persists Within and Across Families",
"sec_num": "5.4"
},
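A sketch of the family-level aggregation, assuming per-language values and a language-to-family mapping (the toy numbers below are illustrative only):

```python
from collections import defaultdict

import numpy as np
from scipy.stats import spearmanr


def family_level_correlation(measure, avg_word_len, family_of):
    """Average a complexity measure and word length per family, then correlate."""
    grouped = defaultdict(lambda: ([], []))
    for lang, value in measure.items():
        xs, ys = grouped[family_of[lang]]
        xs.append(value)
        ys.append(avg_word_len[lang])
    fam_measure = [np.mean(xs) for xs, _ in grouped.values()]
    fam_length = [np.mean(ys) for _, ys in grouped.values()]
    return spearmanr(fam_measure, fam_length)


measure = {"fin": 3.1, "hun": 3.0, "tur": 3.3, "deu": 3.8}  # bits per phoneme
length = {"fin": 8.2, "hun": 7.9, "tur": 7.5, "deu": 6.4}   # avg word length
family = {"fin": "Uralic", "hun": "Uralic", "tur": "Turkic", "deu": "Indo-European"}
print(family_level_correlation(measure, length, family))
```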
{
"text": "We also see our LSTM based measure still shows a strong correlation (Spearman's \u03c1 = \u22120.526 with p = 0.017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Study 4: Negative Trade-off Persists Within and Across Families",
"sec_num": "5.4"
},
{
"text": "We now analyze these correlations intra families, for all family languages in our dataset with at least 4 languages. These results are presented in Table 7 . Our LSTM based phonotactic complexity measure shows strong intra-family correlation with average word length for all five analyzed language families (\u22120.662 \u2265 \u03c1 \u2265 \u22121.0 with p < 0.1). At the same time, vowel inventory size only shows a negative statistically significant correlation within Turkic.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Study 4: Negative Trade-off Persists Within and Across Families",
"sec_num": "5.4"
},
{
"text": "Representations Do Not Generally Improve Models Table 3 presents strong correlations when using an LSTM with standard one-hot lookup embedding.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Study 5: Explicit Feature",
"sec_num": "5.5"
},
{
"text": "Here we train LSTMs with three different phoneme embedding models: (1) a typical Lookup embedding, in which each Phoneme has an associated embedding;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Study 5: Explicit Feature",
"sec_num": "5.5"
},
{
"text": "(2) a phoneme features based embedding, as explained in \u00a74.1;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Study 5: Explicit Feature",
"sec_num": "5.5"
},
{
"text": "(3) the concatenation of the Lookup and the Phoneme embedding. We also train these models both using independent models for each language, and with independent We first analyze these model variants under the same lens as used in Study 1. Table 8 shows the correlations between the complexity measure resulting from each of this models and the average number of phonemes in a word. We find strong correlations for all of them (\u22120.740 \u2265 \u03c1 \u2265 \u22120.752 with p < 10 \u221218 ). We also present in Table 8 these models' cross entropy, averaged across all languages. At least for the methods that we are using here, we derived no benefit from either more explicit featural representations of the phonemes or by sharing the embeddings across languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 8",
"ref_id": "TABREF10"
},
{
"start": 484,
"end": 491,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Study 5: Explicit Feature",
"sec_num": "5.5"
},
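A minimal PyTorch sketch of the three embedding variants feeding a phoneme-level LSTM; the module and variable names are ours and the dimensions are illustrative:

```python
import torch
import torch.nn as nn


class PhonemeEmbedder(nn.Module):
    """Lookup table, projected phoneme features, or their concatenation."""

    def __init__(self, n_phonemes, n_features, dim, mode="lookup"):
        super().__init__()
        self.mode = mode
        self.lookup = nn.Embedding(n_phonemes, dim)
        self.feature_proj = nn.Linear(n_features, dim)

    def forward(self, ids, features):
        if self.mode == "lookup":
            return self.lookup(ids)
        if self.mode == "features":
            return self.feature_proj(features)
        return torch.cat([self.lookup(ids), self.feature_proj(features)], dim=-1)


ids = torch.randint(0, 50, (2, 7))   # a batch of phoneme-id sequences
feats = torch.rand(2, 7, 30)         # matching (here random) feature vectors
emb = PhonemeEmbedder(50, 30, 64, mode="concat")(ids, feats)
lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
out, _ = lstm(emb)                   # per-position states for next-phoneme prediction
```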
{
"text": "We also investigated scenarios using less training data, and it was only in very sparse scenarios (e.g., using just 10% of the training used in our standard trials, or 81 example words) where we observed even a small benefit to explicit feature representations and shared embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Study 5: Explicit Feature",
"sec_num": "5.5"
},
{
"text": "We have presented methods for calculating a wellmotivated measure of phonotactic complexity: bits per phoneme. This measure is derived from information theory and its value is calculated using the probability distribution of a language model. We demonstrate that cross-linguistic comparison is straightforward using such a measure, and find a strong negative correlation with average word length. This trade-off with word length can be seen as an example of complexity compensation or perhaps related to communicative capacity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "See alsoCoup\u00e9 et al. (2019), where syllable-based bigram models are used to establish a comparable information rate in speech across 17 typologically diverse languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code to train these models and reproduce results is available at https://github.com/tpimentelms/ phonotactic-complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For convenience, we just use standard orthography to represent actual and possible words, rather than phoneme strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Hayes and Wilson (2008) label \u03a3 * as \u2126.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We briefly note that the van Son and Pols (2003) study did not make use of a train/dev/test split of their data, but",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The languages with vowel harmony are: bua, ckt, evn, fin, hun, khk, mhr, mnc, myv, tel, and tur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Dami\u00e1n E. Blasi for his feedback on previous versions of this paper and the anonymous reviewers, as well as action editor Eric Fosler-Lussier, for their constructive and detailed comments-the paper is much improved as a result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Pearson r Spearman \u03c1 Per-Word Correlations. We also calculated the correlation between word length and bits per phoneme across all languages (without averaging per language here). The Spearman correlation between these factors-at the word level using all languages-is \u03c1 = \u22120.312 (p < 10 \u221219 ). Analyzing each language individually, there is an average Spearman's \u03c1 = \u22120.257 (p < 10 \u221219 ) between bits per phoneme and word length. The minimum negative (i.e., highest magnitude) correlation of any language in the set is \u03c1 = \u22120.607. These per word correlations are reported in the upper half of Table 4. ",
"cite_spans": [],
"ref_spans": [
{
"start": 593,
"end": 601,
"text": "Table 4.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Measure",
"sec_num": null
}
],
"bib_entries": {
"BIBREF2": {
"ref_id": "b2",
"title": "An estimate of an upper bound for the entropy of English",
"authors": [
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "1",
"pages": "31--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lai. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31-40.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Toward the logical description of languages in their phonemic aspect. Language",
"authors": [
{
"first": "E",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Morris",
"middle": [],
"last": "Halle",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Jakobson",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "",
"issue": "",
"pages": "34--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Colin Cherry, Morris Halle, and Roman Jakobson. 1953. Toward the logical description of languages in their phonemic aspect. Lan- guage, pages 34-46.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Some controversial questions in phonological theory",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
},
{
"first": "Morris",
"middle": [],
"last": "Halle",
"suffix": ""
}
],
"year": 1965,
"venue": "Journal of Linguistics",
"volume": "1",
"issue": "2",
"pages": "97--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky and Morris Halle. 1965. Some controversial questions in phonological theory. Journal of Linguistics, 1(2):97-138.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Phonological neighbourhoods in the developing lexicon",
"authors": [
{
"first": "Jeffry",
"middle": [
"A"
],
"last": "Coady",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Child Language",
"volume": "30",
"issue": "2",
"pages": "441--469",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffry A. Coady and Richard N. Aslin. 2003. Phonological neighbourhoods in the devel- oping lexicon. Journal of Child Language, 30(2):441-469.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Young children's sensitivity to probabilistic phonotactics in the developing lexicon",
"authors": [
{
"first": "Jeffry",
"middle": [
"A"
],
"last": "Coady",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Experimental Child Psychology",
"volume": "89",
"issue": "3",
"pages": "183--213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffry A. Coady and Richard N. Aslin. 2004. Young children's sensitivity to probabilis- tic phonotactics in the developing lexicon. Journal of Experimental Child Psychology, 89(3):183-213.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Probabilistic typology: Deep generative models of vowel inventories",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1182--1192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Jason Eisner. 2017. Probabilistic typology: Deep generative models of vowel inventories. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1182-1192. Vancouver, Canada. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Are all languages equally hard to language-model?",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "536--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 536-541. New Orleans, Louisiana. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Different languages, similar encoding efficiency: Comparable information rates across the human communicative niche",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Coup\u00e9",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Dediu",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Pellegrino",
"suffix": ""
}
],
"year": 2019,
"venue": "Science Advances",
"volume": "5",
"issue": "9",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Coup\u00e9, Yoon Oh, Dan Dediu, and Fran\u00e7ois Pellegrino. 2019. Different languages, similar encoding efficiency: Comparable infor- mation rates across the human communicative niche. Science Advances, 5(9):eaaw2594.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Elements of Information Theory",
"authors": [
{
"first": "M",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Joy",
"middle": [
"A"
],
"last": "Cover",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas M. Cover and Joy A. Thomas. 2012. Elements of Information Theory, John Wiley & Sons.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Words cluster phonetically beyond phonotactic regularities",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Dautriche",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Mahowald",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Christophe",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"T"
],
"last": "Piantadosi",
"suffix": ""
}
],
"year": 2017,
"venue": "Cognition",
"volume": "163",
"issue": "",
"pages": "128--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Dautriche, Kyle Mahowald, Edward Gibson, Anne Christophe, and Steven T. Piantadosi. 2017. Words cluster phonetically beyond phonotactic regularities. Cognition, 163:128-145.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Compiling the Uralic dataset for NorthEuraLex, a lexicostatistical database of northern Eurasia",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Dellert",
"suffix": ""
}
],
"year": 2015,
"venue": "First International Workshop on Computational Linguistics for Uralic Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Dellert. 2015. Compiling the Uralic dataset for NorthEuraLex, a lexicostatistical database of northern Eurasia. In First Inter- national Workshop on Computational Linguis- tics for Uralic Languages.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Information-Theoretic Causal Inference of Lexical Flow",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Dellert",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Dellert. 2017. Information-Theoretic Causal Inference of Lexical Flow. Ph.D. thesis, University of T\u00fcbingen.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "NorthEuraLex (version 0",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Dellert",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Dellert and Gerhard J\u00e4ger. 2017. NorthEuraLex (version 0.9). http:// northeuralex.org/",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A generative model of phonotactics",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Futrell",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Albright",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"J"
],
"last": "O'donnell",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "73--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Futrell, Adam Albright, Peter Graff, and Timothy J. O'Donnell. 2017. A generative model of phonotactics. Transactions of the Association for Computational Linguistics, 5:73-86.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Phonotactic probability influences speech production",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Goldrick",
"suffix": ""
},
{
"first": "Meredith",
"middle": [],
"last": "Larson",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "107",
"issue": "3",
"pages": "1155--1164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Goldrick and Meredith Larson. 2008. Phonotactic probability influences speech pro- duction. Cognition, 107(3):1155-1164.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning OT constraint rankings using a maximum entropy model",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Stockholm Workshop on Variation within Optimality Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater and Mark Johnson. 2003. Learning OT constraint rankings using a max- imum entropy model. In Proceedings of the Stockholm Workshop on Variation within Op- timality Theory.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Interpolating between types and tokens by estimating power-law generators",
"authors": [
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "459--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharon Goldwater, Mark Johnson, and Thomas L. Griffiths. 2006. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems, pages 459-466.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generative phonotactics",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman. 2013. Generative phonotactics. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Language universals, with special reference to feature hierarchies",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Greenberg",
"suffix": ""
}
],
"year": 1966,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Greenberg. 1966. Language universals, with special reference to feature hierarchies. Mouton, The Hague.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A probabilistic Earley parser as a psycholinguistic model",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2nd meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the 2nd meeting of the North American Chapter of the Association for Computational Linguistics, pages 1-8.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The role of predictability in shaping phonological patterns",
"authors": [
{
"first": "Kathleen Currie",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Hume",
"suffix": ""
},
{
"first": "T",
"middle": [
"Florian"
],
"last": "Jaeger",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Wedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Linguistics Vanguard",
"volume": "",
"issue": "s2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen Currie Hall, Elizabeth Hume, T. Florian Jaeger, and Andrew Wedel. 2018. The role of predictability in shaping phonological patterns. Linguistics Vanguard, 4(s2).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Sound Pattern of Russian",
"authors": [
{
"first": "Morris",
"middle": [],
"last": "Halle",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morris Halle. 1959. The Sound Pattern of Russian, Mouton, The Hague.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Knowledge unlearned and untaught: What speakers know about the sounds of their language",
"authors": [
{
"first": "Morris",
"middle": [],
"last": "Halle",
"suffix": ""
}
],
"year": 1978,
"venue": "Linguistic Theory and Psychological Reality",
"volume": "",
"issue": "",
"pages": "294--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morris Halle. 1978, Knowledge unlearned and untaught: What speakers know about the sounds of their language. In M. Halle, J. Bresnan, and G. Miller, editors, Linguistic Theory and Psychological Reality, pages 294-303, The MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A maximum entropy model of phonotactics and phonotactic learning",
"authors": [
{
"first": "Bruce",
"middle": [],
"last": "Hayes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2008,
"venue": "Linguistic Inquiry",
"volume": "39",
"issue": "3",
"pages": "379--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce Hayes and Colin Wilson. 2008. A maximum entropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 39(3):379-440.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A manual of phonology",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "Hockett",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1955,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Francis Hockett. 1955. A manual of pho- nology, Waverly Press, Baltimore, MD.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A course in modern linguistics",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "Hockett",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1958,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Francis Hockett. 1958. A course in modern linguistics, Macmillan, New York.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The role of entropy and surprisal in phonologization and language change",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Hume",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Mailhot",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "29--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Hume and Fr\u00e9d\u00e9ric Mailhot. 2013. The role of entropy and surprisal in phonologization and language change. In Alan C.L. Yu, editor, Origins of sound change: Approaches to pho- nologization, pages 29-47, Oxford University Press, Oxford.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Generalized weighted Chinese restaurant processes for species sampling mixture models",
"authors": [
{
"first": "Hemant",
"middle": [],
"last": "Ishwaran",
"suffix": ""
},
{
"first": "Lancelot",
"middle": [
"F"
],
"last": "James",
"suffix": ""
}
],
"year": 2003,
"venue": "Statistica Sinica",
"volume": "",
"issue": "",
"pages": "1211--1235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hemant Ishwaran and Lancelot F. James. 2003. Generalized weighted Chinese restaurant pro- cesses for species sampling mixture models. Statistica Sinica, pages 1211-1235.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Maximum entropy models and stochastic optimality theory. Architectures, Rules, and Preferences: Variations on Themes by",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger. 2007. Maximum entropy models and stochastic optimality theory. Architec- tures, Rules, and Preferences: Variations on Themes by Joan W. Bresnan. Stanford: CSLI, pages 467-479.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Interpolated estimation of Markov source parameters from sparse data",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1980,
"venue": "Proceedings of the Workshop on Pattern Recognition in Practice",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederick Jelinek. 1980. Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, 1980.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Regular models of phonological rule systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "3",
"pages": "331--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-378.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sharp nearby, fuzzy far away: How neural language models use context",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "284--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), pages 284-294, Melbourne, Australia. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "UniMorph 2.0: Universal morphology",
"authors": [
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "G\u00e9raldine",
"middle": [],
"last": "Walther",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vylomova",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [
"J"
],
"last": "Mielke",
"suffix": ""
},
{
"first": "Arya",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christo Kirov, Ryan Cotterell, John Sylak- Glassman, G\u00e9raldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian J. Mielke, Arya McCarthy, Sandra K\u00fcbler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Uni- versal morphology. In Proceedings of the 11th Language Resources and Evaluation Conference.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Expectation-based syntactic comprehension",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2008,
"venue": "Cognition",
"volume": "106",
"issue": "3",
"pages": "1126--1177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Phonetic universals in consonant systems",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Lindblom",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Maddieson",
"suffix": ""
}
],
"year": 1988,
"venue": "Speech, and Mind",
"volume": "",
"issue": "",
"pages": "62--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Lindblom and Ian Maddieson. 1988. Pho- netic universals in consonant systems. Lan- guage, Speech, and Mind, pages 62-78.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Correlating phonological complexity: Data and validation",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Maddieson",
"suffix": ""
}
],
"year": 2006,
"venue": "Linguistic Typology",
"volume": "10",
"issue": "1",
"pages": "106--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Maddieson. 2006. Correlating phonological complexity: Data and validation. Linguistic Typology, 10(1):106-123.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Calculating phonological complexity, Fran\u00e7ois Pellegrino, Ioana Chitoran, Egidio Marsico, and Christophe Coupe, editors, Approaches to phonological complexity",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Maddieson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "85--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Maddieson. 2009. Calculating phonolog- ical complexity, Fran\u00e7ois Pellegrino, Ioana Chitoran, Egidio Marsico, and Christophe Coupe, editors, Approaches to phonological complexity, pages 85-110. Mouton de Gruyter, Berlin, Germany.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Patterns of Sounds",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Maddieson",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"Ferrari"
],
"last": "Disner",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Maddieson and Sandra Ferrari Disner. 1984. Patterns of Sounds, Cambridge University Press.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Word forms are structured for efficient use",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Mahowald",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Dautriche",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"T"
],
"last": "Piantadosi",
"suffix": ""
}
],
"year": 2018,
"venue": "Cognitive Science",
"volume": "42",
"issue": "8",
"pages": "3116--3134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Mahowald, Isabelle Dautriche, Edward Gibson, and Steven T. Piantadosi. 2018. Word forms are structured for efficient use. Cognitive Science, 42(8):3116-3134.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "\u00c9conomie des changements phon\u00e9tiques",
"authors": [
{
"first": "Andr\u00e9",
"middle": [],
"last": "Martinet",
"suffix": ""
}
],
"year": 1955,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 Martinet. 1955.\u00c9conomie des changements phon\u00e9tiques,\u00c9ditions A. Francke S. A.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "The world's simplest grammars are creole grammars",
"authors": [
{
"first": "John",
"middle": [],
"last": "Mcwhorter",
"suffix": ""
}
],
"year": 2001,
"venue": "Linguistic Typology",
"volume": "5",
"issue": "2",
"pages": "125--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John McWhorter. 2001. The world's simplest grammars are creole grammars. Linguistic Typology, 5(2):125-66.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "An analysis of neural language modeling at multiple scales",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.08240"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "What kind of language is hard to language-model?",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sebastian",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mielke",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4975--4989",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What kind of language is hard to language-model? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975-4989, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "On the feasibility of complexity metrics",
"authors": [
{
"first": "Matti",
"middle": [],
"last": "Miestamo",
"suffix": ""
}
],
"year": 2006,
"venue": "FinEst Linguistics, Proceedings of the Annual Finnish and Estonian Conference of Linguistics",
"volume": "",
"issue": "",
"pages": "11--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matti Miestamo. 2006. On the feasibility of complexity metrics. In FinEst Linguistics, Pro- ceedings of the Annual Finnish and Estonian Conference of Linguistics, pages 11-26.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Grammatical complexity in a cross-linguistic perspective",
"authors": [
{
"first": "Matti",
"middle": [],
"last": "Miestamo",
"suffix": ""
}
],
"year": 2008,
"venue": "Language complexity: Typology, contact, change",
"volume": "",
"issue": "",
"pages": "23--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matti Miestamo. 2008, Grammatical complex- ity in a cross-linguistic perspective. In Matti Miestamo, Kaius Sinnemaki, and Fred Karlsson, editors, Language complexity: Typology, con- tact, change, pages 23-41. John Benjamins, Amsterdam, The Netherlands.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1\u0161",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan\u010dernock\u1ef3",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "Eleventh Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Mikolov, Martin Karafi\u00e1t, Luk\u00e1\u0161 Burget, Jan\u010cernock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based lan- guage model. In Eleventh Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Crosslinguistic comparison of complexity measures in phonological systems",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Dami\u00e1n",
"middle": [],
"last": "Blasi",
"suffix": ""
}
],
"year": 2014,
"venue": "Measuring grammatical complexity",
"volume": "",
"issue": "",
"pages": "217--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Moran and Dami\u00e1n Blasi. 2014, Cross- linguistic comparison of complexity measures in phonological systems, Frederick J. Newmeyer and Laurel B. Preston, editors, Mea- suring grammatical complexity, pages 217-240. Oxford University Press, Oxford, UK.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Segmental inventory size, word length, and communicative efficiency",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Nettle",
"suffix": ""
}
],
"year": 1995,
"venue": "Linguistics",
"volume": "33",
"issue": "",
"pages": "359--367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Nettle. 1995. Segmental inventory size, word length, and communicative efficiency. Linguistics, 33:359-367.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "A crosslanguage perspective on speech information rate",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Pellegrino",
"suffix": ""
},
{
"first": "Ioana",
"middle": [],
"last": "Chitoran",
"suffix": ""
},
{
"first": "Egidio",
"middle": [],
"last": "Marsico",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Coup\u00e9",
"suffix": ""
}
],
"year": 2011,
"venue": "Language",
"volume": "87",
"issue": "3",
"pages": "539--558",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Pellegrino, Ioana Chitoran, Egidio Marsico, and Christophe Coup\u00e9. 2011. A cross- language perspective on speech information rate. Language, 87(3):539-558.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Word lengths are optimized for efficient communication",
"authors": [
{
"first": "T",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Piantadosi",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Tily",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "108",
"issue": "",
"pages": "3526--3529",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2011. Word lengths are optimized for efficient communication. Proceedings of the Na- tional Academy of Sciences, 108(9):3526-3529.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "The communicative function of ambiguity in language",
"authors": [
{
"first": "T",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Piantadosi",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Tily",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 2012,
"venue": "Cognition",
"volume": "122",
"issue": "3",
"pages": "280--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2012. The communicative function of ambiguity in language. Cognition,122(3):280-291.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "The communicative lexicon hypothesis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Harry",
"middle": [
"J"
],
"last": "Piantadosi",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Tily",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gibson",
"suffix": ""
}
],
"year": 2009,
"venue": "The 31st Annual Meeting of the Cognitive Science Society (CogSci09)",
"volume": "",
"issue": "",
"pages": "2582--2587",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven T. Piantadosi, Harry J. Tily, and Edward Gibson. 2009. The communicative lexicon hypothesis. In The 31st Annual Meeting of the Cognitive Science Society (CogSci09), pages 2582-2587.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "The interdependence of frequency, predictability, and informativity in the segmental domain",
"authors": [
{
"first": "Uriel",
"middle": [],
"last": "Cohen Priva",
"suffix": ""
},
{
"first": "T. Florian",
"middle": [],
"last": "Jaeger",
"suffix": ""
}
],
"year": 2018,
"venue": "Linguistics Vanguard",
"volume": "",
"issue": "s2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uriel Cohen Priva and T. Florian Jaeger. 2018. The interdependence of frequency, predictability, and informativity in the segmental domain. Linguistics Vanguard, 4(s2).",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Correlating complexity: A typological approach",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ryan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shosted",
"suffix": ""
}
],
"year": 2006,
"venue": "Linguistic Typology",
"volume": "10",
"issue": "1",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan K. Shosted. 2006. Correlating complexity: A typological approach. Linguistic Typology, 10(1):1-40.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Practical Bayesian optimization of machine learning algorithms",
"authors": [
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Ryan",
"middle": [
"P"
],
"last": "Adams",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2951--2959",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. 2012. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages pages 2951-2959.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Information structure and efficiency in speech production",
"authors": [
{
"first": "R",
"middle": [
"J J H"
],
"last": "Van Son",
"suffix": ""
},
{
"first": "C",
"middle": [
"W"
],
"last": "Louis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pols",
"suffix": ""
}
],
"year": 2003,
"venue": "Eighth European Conference on Speech Communication and Technology (Eurospeech)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.J.J.H. van Son and Louis C.W. Pols. 2003. Information structure and efficiency in speech production. In Eighth European Conference on Speech Communication and Technology (Eurospeech).",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Redundancy rules in phonology. Language",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Stanley",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "393--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Stanley. 1967. Redundancy rules in pho- nology. Language, pages 393-436.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Learning new words: Phonotactic probability in language development",
"authors": [
{
"first": "L",
"middle": [],
"last": "Holly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Storkel",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Speech, Language, and Hearing Research",
"volume": "44",
"issue": "6",
"pages": "1321--1337",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holly L. Storkel. 2001. Learning new words: Phonotactic probability in language develop- ment. Journal of Speech, Language, and Hear- ing Research, 44(6):1321-1337.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Learning new words II: Phonotactic probability in verb learning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Holly",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Storkel",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Speech, Language, and Hearing Research",
"volume": "46",
"issue": "6",
"pages": "1312--1323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holly L. Storkel. 2003. Learning new words II: Phonotactic probability in verb learning. Journal of Speech, Language, and Hearing Re- search, 46(6):1312-1323.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Differentiating phonotactic probability and neighborhood density in adult word learning",
"authors": [
{
"first": "L",
"middle": [],
"last": "Holly",
"suffix": ""
},
{
"first": "Jonna",
"middle": [],
"last": "Storkel",
"suffix": ""
},
{
"first": "Tiffany",
"middle": [
"P"
],
"last": "Armbr\u00fcster",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hogan",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Speech, Language, and Hearing Research",
"volume": "49",
"issue": "6",
"pages": "1175--1192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holly L. Storkel, Jonna Armbr\u00fcster, and Tiffany P. Hogan. 2006. Differentiating phonotactic prob- ability and neighborhood density in adult word learning. Journal of Speech, Language, and Hearing Research, 49(6):1175-1192.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken",
"authors": [
{
"first": "L",
"middle": [],
"last": "Holly",
"suffix": ""
},
{
"first": "Jill",
"middle": [
"R"
],
"last": "Storkel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoover",
"suffix": ""
}
],
"year": 2010,
"venue": "American English. Behavior Research Methods",
"volume": "42",
"issue": "2",
"pages": "497--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holly L. Storkel and Jill R. Hoover. 2010. An online calculator to compute phono- tactic probability and neighborhood den- sity on the basis of child corpora of spoken American English. Behavior Research Meth- ods, 42(2):497-506.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "The independent effects of phonotactic probability and neighbourhood density on lexical acquisition by preschool children",
"authors": [
{
"first": "L",
"middle": [],
"last": "Holly",
"suffix": ""
},
{
"first": "Su-Yeon",
"middle": [],
"last": "Storkel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2011,
"venue": "Language and Cognitive Processes",
"volume": "26",
"issue": "2",
"pages": "191--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holly L. Storkel and Su-Yeon Lee. 2011. The independent effects of phonotactic probability and neighbourhood density on lexical acqui- sition by preschool children. Language and Cognitive Processes, 26(2):191-211.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "LSTM neural networks for language modeling",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Sundermeyer",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2012,
"venue": "Thirteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Sundermeyer, Ralf Schl\u00fcter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Towards greater accuracy in lexicostatistic dating",
"authors": [
{
"first": "Morris",
"middle": [],
"last": "Swadesh",
"suffix": ""
}
],
"year": 1955,
"venue": "International Journal of American Linguistics",
"volume": "21",
"issue": "2",
"pages": "121--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morris Swadesh. 1955. Towards greater accuracy in lexicostatistic dating. International Journal of American Linguistics, 21(2):121-137.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Grundz\u00fcge der phonologie, Van den Hoeck & Ruprecht",
"authors": [
{
"first": "Trubetzkoy",
"middle": [],
"last": "Nikola\u00ef Sergeyevich",
"suffix": ""
}
],
"year": 1938,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola\u00ef Sergeyevich Trubetzkoy. 1938. Grundz\u00fcge der phonologie, Van den Hoeck & Ruprecht, Gottigen, Germany.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Probabilistic phonotactics and neighborhood activation in spoken word recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"A"
],
"last": "Vitevitch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Luce",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Memory and Language",
"volume": "40",
"issue": "3",
"pages": "374--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael S. Vitevitch and Paul A. Luce. 1999. Probabilistic phonotactics and neighborhood activation in spoken word recognition. Journal of Memory and Language, 40(3):374-408.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "The Psycho-Biology of Language: An Introduction to Dynamic Philology",
"authors": [
{
"first": "George",
"middle": [],
"last": "Kingsley",
"suffix": ""
},
{
"first": "Zipf",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1935,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Kingsley Zipf. 1935. The Psycho-Biology of Language: An Introduction to Dynamic Phi- lology, MIT Press, Cambridge, MA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Bits per phoneme vs average word length using an LSTM language model.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Per-phoneme complexity vs average word length under both a trigram and an LSTM language model.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Conventional measures of phonological complexity vs average word length. These complexity measures are based in inventory size.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Kernel density estimate (KDE) of the average phonotactic complexity per word across 106 different languages. Different languages tend to present similar complexities (bits per word).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Complexities for natural and artifical languages when removing vowel harmony. A paired permutation test showed all differences present statistical difference with p < 0.01.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Sample of the lexicon in NorthEuraLex corpus.",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>: Pearson and Spearman rank corre-</td></tr><tr><td>lation coefficients between complexity mea-</td></tr><tr><td>sures and average word length in phoneme</td></tr><tr><td>segments.</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>: Pearson and Spearman rank correlation</td></tr><tr><td>coefficients between complexity measures and</td></tr><tr><td>word length in phoneme segments. All corre-</td></tr><tr><td>lations are statistically significant with p &lt; 10 \u22128 .</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF7": {
"content": "<table><tr><td>: Pearson and Spearman correlation</td></tr><tr><td>between complexity measures and word length</td></tr><tr><td>in phoneme segments averaged across language</td></tr><tr><td>families.</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF8": {
"content": "<table><tr><td/><td colspan=\"3\">Spearman \u03c1</td><td/><td/></tr><tr><td>Family</td><td colspan=\"5\">LSTM Vowels # Langs</td></tr><tr><td>Dravidian Indo-European</td><td>\u22121.0 \u22120.662 *</td><td colspan=\"2\">\u22120.894 * \u22120.218</td><td/><td>4 37</td></tr><tr><td colspan=\"4\">Nakh-Daghestanian \u22120.771 \u2020 \u22120.530</td><td/><td>6</td></tr><tr><td>Turkic</td><td>\u22120.690</td><td colspan=\"2\">\u2020 \u22120.773</td><td>\u2020</td><td>8</td></tr><tr><td>Uralic</td><td>\u22120.874</td><td>*</td><td>0.363</td><td>\u2020</td><td>26</td></tr><tr><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"text": "Statistically significant with p < 0.01 \u2020 Statistically significant with p < 0.1",
"num": null
},
"TABREF10": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Average cross-entropy across all languages and the correlation between complexity and average word length for different models. models, but sharing embedding weights across languages.",
"num": null
}
}
}
}