{
"paper_id": "C02-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:18:54.527650Z"
},
"title": "Determining Recurrent Sound Correspondences by Inducing Translation Models",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Toronto",
"location": {
"postCode": "M5S 3G4",
"settlement": "Toronto",
"region": "Ontario",
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "I present a novel approach to the determination of recurrent sound correspondences in bilingual wordlists. The idea is to relate correspondences between sounds in wordlists to translational equivalences between words in bitexts (bilingual corpora). My method induces models of sound correspondence that are similar to models developed for statistical machine translation. The experiments show that the method is able to determine recurrent sound correspondences in bilingual wordlists in which less than 30% of the pairs are cognates. By employing the discovered correspondences, the method can identify cognates with higher accuracy than the previously reported algorithms.",
"pdf_parse": {
"paper_id": "C02-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "I present a novel approach to the determination of recurrent sound correspondences in bilingual wordlists. The idea is to relate correspondences between sounds in wordlists to translational equivalences between words in bitexts (bilingual corpora). My method induces models of sound correspondence that are similar to models developed for statistical machine translation. The experiments show that the method is able to determine recurrent sound correspondences in bilingual wordlists in which less than 30% of the pairs are cognates. By employing the discovered correspondences, the method can identify cognates with higher accuracy than the previously reported algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Genetically related languages often exhibit recurrent sound correspondences (henceforth referred to simply as correspondences) in words with similar meaning. For example, t:d, \u00cc:t, n:n, and other known correspondences between English and Latin are demonstrated by the word pairs in Table 1 . Word pairs that contain such correspondences are called cognates, because they originate from the same protoform in the ancestor language. Correspondences in cognates are preserved over time thanks to the regularity of sound changes, which normally apply to sounds in a given phonological context across all words in the language.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The determination of correspondences is the principal step of the comparative method of language reconstruction. Not only does it provide evidence for the relatedness of languages, but it also makes it possible to distinguish cognates from loan words and chance resemblances. However, because manual determination of correspondences is an extremely time-consuming process, it has yet to be accomplished for many proposed language families. A system able to perform this task automatically from unprocessed bilingual wordlists could be of great assistance to historical linguists. The Reconstruction Engine (Lowe and Mazaudon, 1994) , a set of programs designed to be an aid in language reconstruction, requires a set of correspondences to be provided beforehand. The determination of correspondences is closely related to another task that has been much studied in computational linguistics, the identification of cognates. Cognates have been employed for sentence and word alignment in bitexts (Simard et al., 1992; , improving statistical machine translation models (Al-Onaizan et al., 1999) , and inducing translation lexicons (Koehn and Knight, 2001) . Some of the proposed cognate identification algorithms implicitly determine and employ correspondences (Tiedemann, 1999; Mann and Yarowsky, 2001) .",
"cite_spans": [
{
"start": 606,
"end": 631,
"text": "(Lowe and Mazaudon, 1994)",
"ref_id": "BIBREF9"
},
{
"start": 995,
"end": 1016,
"text": "(Simard et al., 1992;",
"ref_id": "BIBREF15"
},
{
"start": 1068,
"end": 1093,
"text": "(Al-Onaizan et al., 1999)",
"ref_id": "BIBREF0"
},
{
"start": 1130,
"end": 1154,
"text": "(Koehn and Knight, 2001)",
"ref_id": "BIBREF5"
},
{
"start": 1260,
"end": 1277,
"text": "(Tiedemann, 1999;",
"ref_id": "BIBREF17"
},
{
"start": 1278,
"end": 1302,
"text": "Mann and Yarowsky, 2001)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although it may not be immediately apparent, there is a strong similarity between the task of matching phonetic segments in a pair of cognate words, and the task of matching words in two sentences that are mutual translations ( Figure 1) . The consistency with which a word in one language is translated into a word in another language is mirrored by the consistency of sound correspondences. The former is due to the semantic relation of synonymy, while the latter follows from the principle of the regularity of sound change. Thus, as already asserted by Guy (1994) , it should be possible to use similar techniques for both tasks.",
"cite_spans": [
{
"start": 557,
"end": 567,
"text": "Guy (1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 228,
"end": 237,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The primary objective of the method proposed in this paper is the automatic determination of correspondences in bilingual wordlists, such as the one in Table 1 . The method exploits the idea of relating correspondences in bilingual wordlists to translational equivalence associations in bitexts through the employment of models developed in the context of statistical machine translation, The second task addressed in this paper is the identification of cognates on the basis of the discovered correspondences. The experiments to be described in Section 6 show that the method is capable of determining correspondences in bilingual wordlists in which less than 30% of the pairs are cognates, and outperforms comparable algorithms on cognate identification. Although the experiments focus on bilingual wordlists, the approach presented in this paper could potentially be applied to other bitext-related tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 159,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In a schematic description of the comparative method, the two steps that precede the determination of correspondences are the identification of cognate pairs (Kondrak, 2001) , and their phonetic alignment (Kondrak, 2000) . Indeed, if a comprehensive set of correctly aligned cognate pairs is available, the correspondences could be extracted by simply following the alignment links. Unfortunately, in order to make reliable judgments of cognation, it is necessary to know in advance what the correspondences are. Historical linguists solve this apparent circularity by guessing a small number of likely cognates and refining the set of correspondences and cognates in an iterative fashion. Guy (1994) outlines an algorithm for identifying cognates in bilingual wordlists which is based on correspondences. The algorithm estimates the probability of phoneme correspondences by employing a variant of the \u03c7 2 statistic on a contingency table, which indicates how often two phonemes cooccur in words of the same meaning. The probabilities are then converted into the estimates of cognation by means of some experimentation-based heuristics. The paper does not contain any evaluation on authentic language data, but Guy's program COGNATE, which implements the algorithm, is publicly available. An experimental evaluation of COGNATE is described in Section 6.",
"cite_spans": [
{
"start": 158,
"end": 173,
"text": "(Kondrak, 2001)",
"ref_id": "BIBREF7"
},
{
"start": 205,
"end": 220,
"text": "(Kondrak, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 690,
"end": 700,
"text": "Guy (1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Oakes 2000describes a set of programs that together perform several steps of the comparative method, from the determination of correspondences in wordlists to the actual reconstruction of the protoforms. Word pairs are considered cognate if their edit distance is below a certain threshold. The edit operations cover a number of sound-change categories. Sound correspondences are deemed to be regular if they are found to occur more than once in the data. The paper describes experimental results of running the programs on a set of wordlists representing four Indonesian languages, and compares those to the reconstructions found in the linguistic literature. Section 6 contains an evaluation of one of the programs in the set, JAKARTA, on the cognate identification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In statistical machine translation, a translation model approximates the probability that two sentences are mutual translations by computing the product of the probabilities that each word in the target sentence is a translation of some source language word. A model of translation equivalence that determines the word translation probabilities can be induced from bitexts. The difficulty lies in the fact that the mapping, or alignment, of words between two parts of a bitext is not known in advance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},
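{
"text": "As an illustration only (not the model used in this paper or in Melamed's work), the following Python sketch scores a sentence pair as the product of per-word translation probabilities; the translation table t and the fallback probability of 1e-9 are invented for the example.\n\ndef sentence_translation_prob(src, tgt, t):\n    # t[(u, v)] holds the probability that source word u translates to target word v.\n    # Each target word is explained by its best-scoring source word; this is a\n    # simplification of models that sum over all source words.\n    prob = 1.0\n    for v in tgt:\n        prob *= max(t.get((u, v), 1e-9) for u in src)\n    return prob\n\n# Toy usage with a hand-made translation table.\nt = {('chien', 'dog'): 0.8, ('le', 'the'): 0.9}\nprint(sentence_translation_prob(['le', 'chien'], ['the', 'dog'], t))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},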
{
"text": "Algorithms for word alignment in bitexts aim at discovering word pairs that are mutual translations. A straightforward approach is to estimate the likelihood that words are mutual translations by computing a similarity function based on a co-occurrence statistic, such as mutual information, Dice coefficient, or the \u03c7 2 test. The underlying assumption is that the association scores for different word pairs are independent of each other. Melamed (2000) shows that the assumption of independence leads to invalid word associations, and proposes an algorithm for inducing models of translational equivalence that outperform the models that are based solely on co-occurrence counts. His models employ the one-to-one assumption, which formalizes the observation that most words in bitexts are translated to a single word in the corresponding sentence. The algorithm, which is related to the expectation-maximization (EM) algorithm, iteratively re-estimates the likelihood scores which represent the probability that two word types are mutual translations. In the first step, the scores are initialized according to the G 2 statistic (Dunning, 1993) . Next, the likelihood scores are used to induce a set of one-to-one links between word tokens in the bitext. The links are determined by a greedy competitive linking algorithm, which proceeds to link pairs that have the highest likelihood scores. After the linking is completed, the link counts are used to re-estimate the likelihood scores, which in turn are applied to find a new set of links. The process is repeated until the translation model converges to the desired degree.",
"cite_spans": [
{
"start": 440,
"end": 454,
"text": "Melamed (2000)",
"ref_id": "BIBREF13"
},
{
"start": 1131,
"end": 1146,
"text": "(Dunning, 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},
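{
"text": "The following Python sketch illustrates the iterative estimation loop described above. It is an illustration rather than Melamed's implementation: scores are initialized from raw co-occurrence counts instead of the G\u00b2 statistic, null links are ignored, and the function names (competitive_linking, induce_model) are invented.\n\nfrom collections import Counter\n\ndef competitive_linking(src, tgt, score):\n    # Greedy one-to-one linking: consider position pairs in order of decreasing\n    # likelihood score and link each token position at most once.\n    pairs = sorted(((score.get((u, v), 0.0), i, j)\n                    for i, u in enumerate(src)\n                    for j, v in enumerate(tgt)), reverse=True)\n    used_i, used_j, links = set(), set(), []\n    for s, i, j in pairs:\n        if s > 0 and i not in used_i and j not in used_j:\n            links.append((src[i], tgt[j]))\n            used_i.add(i)\n            used_j.add(j)\n    return links\n\ndef induce_model(bitext, iterations=5):\n    # Initialize scores from raw co-occurrence counts (the original algorithm\n    # uses the G\u00b2 statistic instead).\n    score = Counter()\n    for src, tgt in bitext:\n        for u in src:\n            for v in tgt:\n                score[(u, v)] += 1\n    for _ in range(iterations):\n        link_counts = Counter()\n        for src, tgt in bitext:\n            link_counts.update(competitive_linking(src, tgt, score))\n        total = sum(link_counts.values()) or 1\n        # Re-estimate the score of a pair as its share of all induced links\n        # (the log of this share is the Method A score described below).\n        score = Counter({pair: n / total for pair, n in link_counts.items()})\n    return score\n\n# Toy bitext of two 'sentence' pairs.\nbitext = [(['a', 'b'], ['x', 'y']), (['a', 'c'], ['x', 'z'])]\nprint(induce_model(bitext).most_common(3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},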
{
"text": "Melamed presents three translation-model estimation methods. Method A re-estimates the likelihood scores as the logarithm of the probability of jointly generating the pair of words u and v: score A\u00b4u v\u00b5 log links\u00b4u v\u00b5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},
{
"text": "where links\u00b4u v\u00b5 denotes the number of links induced between u and v. where B\u00b4k n p\u00b5 denotes the probability of k being generated from a binomial distribution with parameters n and p. In Method C, bitext tokens are divided into classes, such as content words, function words, punctuation, etc., with the aim of producing more accurate translation models. The auxiliary parameters are estimated separately for each class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},
{
"text": "score C\u00b4u v Z class\u00b4u v\u00b5\u00b5 log B\u00b4links\u00b4u v\u00b5 cooc\u00b4u v\u00b5 \u03bb \u2022 Z \u00b5 B\u00b4links\u00b4u v\u00b5 cooc\u00b4u v\u00b5 \u03bb Z \u00b5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},
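{
"text": "A minimal Python sketch of the scoring functions of methods A and B as defined above (Method C uses the same form as B, but with class-specific \u03bb parameters); the binomial probability is computed directly with math.comb, and the \u03bb values in the example are invented, whereas the method estimates them by maximum likelihood.\n\nimport math\n\ndef binom_pmf(k, n, p):\n    # B(k | n, p): probability of exactly k successes in n Bernoulli trials.\n    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)\n\ndef score_a(links_uv, total_links):\n    # Method A: log of the joint probability of generating the pair (u, v).\n    return math.log(links_uv / total_links)\n\ndef score_b(links_uv, cooc_uv, lam_plus, lam_minus):\n    # Method B: log-likelihood ratio of the translation explanation (lambda+)\n    # against the noise explanation (lambda-).\n    return math.log(binom_pmf(links_uv, cooc_uv, lam_plus) /\n                    binom_pmf(links_uv, cooc_uv, lam_minus))\n\n# Example: 9 links out of 10 co-occurrences looks like a genuine correspondence.\nprint(score_a(9, 500))\nprint(score_b(9, 10, 0.9, 0.05))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of translational equivalence",
"sec_num": "3"
},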
{
"text": "Thanks to its generality and symmetry, Melamed's parameter estimation process can be adapted to the problem of determining correspondences. The main idea is to induce a model of sound correspondence in a bilingual wordlist, in the same way as one induces a model of translational equivalence among words in a parallel corpus. After the model has converged, phoneme pairs with the highest likelihood scores represent the most likely correspondences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of sound correspondence",
"sec_num": "4"
},
{
"text": "While there are strong similarities between the task of estimating translational equivalence of words and the task of determining recurrent correspondences of sounds, a number of important modifications to Melamed's original algorithm are necessary in order to make it applicable to the latter task. The modifications include the method of finding a good alignment, the handling of null links, and the method of computing the alignment score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of sound correspondence",
"sec_num": "4"
},
{
"text": "For the task at hand, I employ a different method of aligning the segments in two corresponding sequences. In sentence translation, the alignment links frequently cross and it is not unusual for two words in different parts of sentences to correspond. In contrast, the processes that lead to link intersection in diachronic phonology, such as metathesis, are quite sporadic. The introduction of the no-crossing-links constraint on alignments not only leads to a dramatic reduction of the search space, but also makes it possible to replace the approximate competitive-linking algorithm of Melamed with a variant of the well-known dynamic programming algorithm (Wagner and Fischer, 1974; Kondrak, 2000) , which computes the optimal alignment between two strings in polynomial time.",
"cite_spans": [
{
"start": 660,
"end": 686,
"text": "(Wagner and Fischer, 1974;",
"ref_id": "BIBREF18"
},
{
"start": 687,
"end": 701,
"text": "Kondrak, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models of sound correspondence",
"sec_num": "4"
},
{
"text": "Null links in statistical machine translation are induced for words on one side of the bitext that have no clear counterparts on the other side of the bitext. Melamed's algorithm explicitly calculates the likelihood scores of null links for every word type occurring in a bitext. In diachronic phonology, phonological processes that lead to insertion or deletion of segments usually operate on individual words rather than on particular sounds across the language. Therefore, I model insertion and deletion by employing a constant indel penalty for unlinked segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of sound correspondence",
"sec_num": "4"
},
{
"text": "The alignment score between two words is computed by summing the number of induced links, and applying an indel penalty for each unlinked segment, with the exception of the segments beyond the rightmost link. The exception reflects the relative instability of word endings in the course of linguistic evolution. In order to avoid inducing links that are unlikely to represent recurrent sound correspondences, only pairs whose likelihood scores exceed a set threshold are linked. All correspondences above the threshold are considered to be equally valid. In the cases where more than one best alignment is found, each link is assigned a weight that is its average over the entire set of best alignments (for example, a link present in only one of two competing alignments receives the weight of 0 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of sound correspondence",
"sec_num": "4"
},
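{
"text": "The following Python sketch illustrates the kind of alignment procedure described in this section; it is a simplified stand-in rather than CORDI itself (which is written in C++). It is a standard dynamic-programming aligner in which links are restricted to phoneme pairs whose likelihood score exceeds the threshold and every unlinked segment pays a constant indel penalty; the special treatment of segments beyond the rightmost link and the weighting of ties between best alignments are omitted, and the example score table is invented.\n\ndef align(word1, word2, score, threshold=1.0, indel=-0.15):\n    # Dynamic-programming alignment without crossing links. A link is allowed\n    # only if the likelihood score of the phoneme pair exceeds the threshold;\n    # every unlinked segment costs the indel penalty.\n    n, m = len(word1), len(word2)\n    neg = float('-inf')\n    dp = [[(neg, []) for _ in range(m + 1)] for _ in range(n + 1)]\n    dp[0][0] = (0.0, [])\n    for i in range(n + 1):\n        for j in range(m + 1):\n            best, links = dp[i][j]\n            if best == neg:\n                continue\n            if (i < n and j < m\n                    and score.get((word1[i], word2[j]), 0.0) > threshold):\n                cand = (best + 1.0, links + [(word1[i], word2[j])])\n                if cand[0] > dp[i + 1][j + 1][0]:\n                    dp[i + 1][j + 1] = cand\n            if i < n and best + indel > dp[i + 1][j][0]:\n                dp[i + 1][j] = (best + indel, links)\n            if j < m and best + indel > dp[i][j + 1][0]:\n                dp[i][j + 1] = (best + indel, links)\n    return dp[n][m]\n\n# Hypothetical likelihood scores for an English/Latin phoneme inventory fragment.\nscores = {('n', 'n'): 5.0, ('o', 'i'): 2.0, ('w', 'w'): 4.0}\nprint(align(list('snow'), list('niw'), scores))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models of sound correspondence",
"sec_num": "4"
},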
{
"text": "The method described above has been implemented as a C++ program, named CORDI, which will soon be made publicly available. The program takes as input a bilingual wordlist and produces an ordered list of correspondences. A model for a 200-pair list usually converges after 3-5 iterations, which takes only a few seconds on a Sparc workstation. The user can choose between methods A, B, and C, described in Section 3, and an additional Method D. In Method C, phonemes are divided into two classes: non-syllabic (consonants and glides), and syllabic (vowels); links between phonemes belonging to different classes are not induced. Method D differs from Method C in that the syllabic phonemes do not participate in any links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5"
},
{
"text": "Adjustable parameters include the indel penalty ratio d and the minimum-strength correspondence threshold t. The parameter d fixes the ratio between the negative indel weight and the positive weight assigned to every induced link. (A lower ratio causes the program to be more adventurous in positing sparse links.) The parameter t controls the tradeoff between reliability and the number of links. In Method A, the value of t is the minimum number of phoneme links that have to be induced for the correspondence to be valid. In methods B, C, and D, the value of t implies a likelihood score threshold of t \u00a1 log \u03bb \u2022 \u03bb , which is a score achieved by a pair of phonemes that have t links out of t cooccurrences. In the experiments reported in Section 6, d was set to 0 15, and t was set to 1 (sufficient to reject all non-recurring correspondences). In Method D, where the lack of vowel links causes the linking constraints to be weaker, a higher value of t 3 was used. These parameter values were optimized on the development set described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5"
},
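{
"text": "A small illustration of how the parameter t translates into a likelihood-score threshold under the scoring of methods B, C, and D; the \u03bb values below are placeholders, since CORDI estimates \u03bb+ and \u03bb\u2212 from the data.\n\nimport math\n\ndef score_threshold(t, lam_plus, lam_minus):\n    # Score achieved by a phoneme pair with t links out of t co-occurrences:\n    # t times the log of (lambda+ / lambda-).\n    return t * math.log(lam_plus / lam_minus)\n\nprint(score_threshold(1, 0.9, 0.05))  # t = 1, as used for methods B and C\nprint(score_threshold(3, 0.9, 0.05))  # t = 3, the stricter setting for Method D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5"
},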
{
"text": "The experiments in this section were performed using a well-known list of 200 basic meanings that are considered universal and relatively resistant to lexical replacement (Swadesh, 1952) . The Swadesh 200-word lists are widely used in linguistics and have been compiled for a large number of languages.",
"cite_spans": [
{
"start": 171,
"end": 186,
"text": "(Swadesh, 1952)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The data for experiments",
"sec_num": "6.1"
},
{
"text": "The development set consisted of three 200-word list pairs adapted from the Comparative Indoeuropean Data Corpus (Dyen et al., 1992) . The corpus contains the 200-word lists for a number of Indoeuropean languages together with cognation judgments made by a renowned historical linguist Isidore Dyen. Unfortunately, the words are represented in the Roman alphabet without any diacritical marks, which makes them unsuitable for automatic phonetic analysis. The Polish-Russian, Spanish-Romanian, and Italian-Serbocroatian were selected because they represent three different levels of relatedness (73.5%, 58.5%, and 25.3% of cognate pairs, respectively), and also because they have relatively transparent grapheme-to-phoneme conversion rules. They were transcribed into a phonetic notation by means of Perl scripts and then stemmed and corrected manually.",
"cite_spans": [
{
"start": 113,
"end": 132,
"text": "(Dyen et al., 1992)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The data for experiments",
"sec_num": "6.1"
},
{
"text": "The test set consisted of five 200-word lists representing English, German, French, Latin, and Albanian, compiled by Kessler (2001) As the lists contain rich phonetic and morphological information, the stemmed forms were automatically converted from the XML format with virtually no extra pro-cessing. The word pairs classified by Kessler as doubtful cognates were assumed to be unrelated.",
"cite_spans": [
{
"start": 117,
"end": 131,
"text": "Kessler (2001)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The data for experiments",
"sec_num": "6.1"
},
{
"text": "Experiments show that CORDI has little difficulty in determining correspondences given a set of cognate pairs (Kondrak, 2002) In order to test CORDI's ability to determine correspondences in noisy data, Method D was applied to the 200-word lists for English and Latin. Only 29% of word pairs are actually cognate; the remaining 71% of the pairs are unrelated lexemes. The top ten correspondences discovered by the program are shown in Table 2 . Remarkably, all but one are valid. In contrast, only four of the top ten phoneme matchings picked up by the \u03c7 2 statistic are valid correspondences (the validity judgements are my own).",
"cite_spans": [
{
"start": 110,
"end": 125,
"text": "(Kondrak, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 435,
"end": 442,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Determination of correspondences in word pairs",
"sec_num": "6.2"
},
{
"text": "The quality of correspondences produced by CORDI is difficult to validate, quantify, and compare with the results of alternative approaches. However, it is possible to evaluate the correspondences indirectly by using them to identify cognates. The likelihood of cognation of a pair of words increases with the number of correspondences that they contain. Since CORDI explicitly posits correspondence links between words, the likelihood of cognation can be estimated by simply dividing the number of induced links by the length of the words that are being compared. A minimum-length parameter can be set in order to avoid computing cognation estimates for very short words, which tend to be unreliable. r i word pair cognate? i p i 1 /h rt/:/kord/ yes 1 1.00 2 /h t/:/kalid/ no 3 /sn\u014d/:/niw/ yes 2 0.66 Table 3 : An example ranking of cognate pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 802,
"end": 809,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},
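{
"text": "A minimal Python sketch of the ranking step described above. The normalization by the average of the two word lengths is an assumption (the text says only that the number of induced links is divided by the length of the words), and the link counts in the example are invented.\n\ndef rank_by_cognation(pairs, min_length=3):\n    # Order (word1, word2, induced_links) triples by the estimated likelihood of\n    # cognation: induced links divided by word length. Very short words are\n    # skipped, mirroring the minimum-length parameter.\n    scored = []\n    for w1, w2, links in pairs:\n        if min(len(w1), len(w2)) < min_length:\n            continue\n        scored.append((links / ((len(w1) + len(w2)) / 2), w1, w2))\n    return sorted(scored, reverse=True)\n\n# Toy example loosely following Table 3; the link counts are made up.\npairs = [('hart', 'kord', 3), ('hat', 'kalid', 1), ('sno', 'niw', 2)]\nfor s, w1, w2 in rank_by_cognation(pairs):\n    print(w1, w2, round(s, 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},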
{
"text": "The evaluation method for cognate identification algorithms adopted in this section is to apply them to a bilingual wordlist and order the pairs according to their scores (refer to Table 3 ). The ranking is then evaluated against a gold standard by computing the n-point average precision, a generalization of the 11-point average precision, where n is the total number of cognate pairs in the list. The n-point average precision is obtained by taking the average of n precision values that are calculated for each point in the list where we find a cognate pair:",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 188,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},
{
"text": "p i i r i i 1 n,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},
{
"text": "where i is the number of the cognate pair counting from the top of the list produced by the algorithm, and r i is the rank of this cognate pair among all word pairs. The n-point precision of the ranking in Table 3 Table 5 : Average cognate identification precision on the test set for various methods. Table 4 compares the average precision achieved by methods A, B, C, and D on the development set. The cognation judgments from the Comparative Indoeuropean Data Corpus served as the gold standard.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 213,
"text": "Table 3",
"ref_id": null
},
{
"start": 214,
"end": 221,
"text": "Table 5",
"ref_id": null
},
{
"start": 302,
"end": 309,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},
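{
"text": "A short Python sketch of the n-point average precision as defined above, applied to the ranking of Table 3.\n\ndef n_point_average_precision(cognate_ranks):\n    # cognate_ranks[i - 1] holds r_i, the rank of the i-th cognate pair\n    # (counted from the top of the ranking) among all word pairs.\n    n = len(cognate_ranks)\n    return sum(i / r for i, r in enumerate(cognate_ranks, start=1)) / n\n\n# In Table 3 the cognate pairs appear at ranks 1 and 3,\n# so the n-point precision is (1/1 + 2/3) / 2, roughly 0.83.\nprint(n_point_average_precision([1, 3]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},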
{
"text": "All four methods proposed in this paper as well as other cognate identification programs were uniformly applied to the test set representing five Indoeuropean languages. Apart from the English-German and the French-Latin pairs, all remaining language pairs are quite challenging for a cognate identification program. In many cases, the goldstandard cognate judgments distill the findings of decades of linguistic research. In fact, for some of those pairs, Kessler finds it difficult to show by statistical techniques that the surface regularities are unlikely to be due to chance. Nevertheless, in order to avoid making subjective choices, CORDI was evaluated on all possible language pairs in Kessler's set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},
{
"text": "Two programs mentioned in Section 2, COG-NATE and JAKARTA, were also applied to the test set. The source code of JAKARTA was obtained directly from the author and slightly modified according to his instructions in order to make it recognize additional phonemes. Word pairs were ordered according to the confidence scores in the case of COG-NATE, and according to the edit distances in the case of JAKARTA. Since the other two programs do not impose any length constraints on words, the minimum-length parameter was not used in the experiments described here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},
{
"text": "The results on the test set are shown in Table 5 . The best result for each language pair is underlined. The performance of COGNATE and JAKARTA is quite similar, even though they represent two radically different approaches to cognate identification. On average, methods B, C, and D outperform both comparison programs. On closely related languages, Method B, with its relatively unconstrained linking, achieves the highest precision. Method D, which considers only consonants, is the best on fairly remote languages, where vowel correspondences tend to be weak. The only exception is the extremely difficult Albanian-English pair, where the relative ordering of methods seems to be accidental. As expected, Method A is outperformed by methods that employ an explicit noise model. However, in spite of its extra complexity, Method C is not consistently better than Method B, perhaps because of its inability to detect important vowel-consonant correspondences, such as the ones between French nasal vowels and Latin /n/.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 48,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Identification of cognates in word pairs",
"sec_num": "6.3"
},
{
"text": "I have presented a novel approach to the determination of correspondences in bilingual wordlists. The results of experiments indicate that the approach is robust enough to handle a substantial amount of noise that is introduced by unrelated word pairs. CORDI does well even when the number of non-cognate pairs is more than double the number of cognate pairs. When tested on the cognate-identification task, CORDI achieves substantially higher precision than comparable programs. The correspondences are explicitly posited, which means that, unlike in some statistical approaches, they can be verified by examining individual cognate pairs. In contrast with approaches that assume a rigid alignment based on the syl-labic structure, the models presented here can link phonemes in any word position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "Currently, I am working on the incorporation of complex correspondences into the cognate identification algorithm by employing Melamed's (1997) algorithm for discovering non-compositional compounds in parallel data. Such an extension would overcome the limitation of the one-to-one model, in which links are induced only between individual phonemes. Other possible extensions include taking into account the phonological context of correspondences, combining the correspondence-based approach with phonetic-based approaches, and identifying correspondences and cognates directly in dictionary-type data.",
"cite_spans": [
{
"start": 127,
"end": 143,
"text": "Melamed's (1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "The results presented here prove that the techniques developed in the context of statistical machine translation can be successfully applied to a problem in diachronic phonology. The transfer of methods and insights should also be possible in the other direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "Thanks to Graeme Hirst, Radford Neal, and Suzanne Stevenson for helpful comments, to Michael Oakes for assistance with JAKARTA, and to Gemma Enriquez for helping with the experimental evaluation of COGNATE. This research was supported by the Natural Sciences and Engineering Research Council of Canada.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Statistical machine translation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Curin",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Jahr",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Melamed",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Purdy",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Al-Onaizan, J. Curin, M. Jahr, K. Knight, J. Laf- ferty, D. Melamed, F. Och, D. Purdy, N. Smith, and D. Yarowsky. 1999. Statistical machine translation. Technical report, Johns Hopkins University.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguis- tics, 19(1):61-74.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An Indoeuropean classification: A lexicostatistical experiment",
"authors": [
{
"first": "Isidore",
"middle": [],
"last": "Dyen",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"B"
],
"last": "Kruskal",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 1992,
"venue": "Transactions of the American Philosophical Society",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isidore Dyen, Joseph B. Kruskal, and Paul Black. 1992. An Indoeuropean classification: A lexicosta- tistical experiment. Transactions of the American Philosophical Society, 82(5). Word lists available at http://www.ldc.upenn.edu/ldc/service/comp-ie.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An algorithm for identifying cognates in bilingual wordlists and its applicability to machine translation",
"authors": [
{
"first": "B",
"middle": [
"M"
],
"last": "Jacques",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guy",
"suffix": ""
}
],
"year": 1994,
"venue": "MS-DOS executable",
"volume": "1",
"issue": "",
"pages": "35--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacques B. M. Guy. 1994. An algorithm for identify- ing cognates in bilingual wordlists and its applicability to machine translation. Journal of Quantitative Lin- guistics, 1(1):35-42. MS-DOS executable available at http://garbo.uwasa.fi.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Significance of Word Lists",
"authors": [
{
"first": "Brett",
"middle": [],
"last": "Kessler",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brett Kessler. 2001. The Significance of Word Lists. Stanford: CSLI Publications. Word lists available at http://spell.psychology.wayne.edu/ bkessler.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Knowledge sources for word-level translation models",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Kevin Knight. 2001. Knowledge sources for word-level translation models. In Pro- ceedings of the 2001 Conference on Empirical Meth- ods in Natural Language Processing, pages 27-35.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A new algorithm for the alignment of phonetic sequences",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of NAACL 2000: 1st Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "288--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proceedings of NAACL 2000: 1st Meeting of the North American Chapter of the Association for Computational Lin- guistics, pages 288-295.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Identifying cognates by phonetic and semantic similarity",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL 2001: 2nd Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "103--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2001. Identifying cognates by pho- netic and semantic similarity. In Proceedings of NAACL 2001: 2nd Meeting of the North American Chapter of the Association for Computational Lin- guistics, pages 103-110.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Algorithms for Language Reconstruction",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Kondrak. 2002. Algorithms for Language Re- construction. Ph.D. thesis, University of Toronto. Available at http://www.cs.toronto.edu/ kondrak.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The reconstruction engine: a computer implementation of the comparative method",
"authors": [
{
"first": "B",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Martine",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mazaudon",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "",
"pages": "381--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John B. Lowe and Martine Mazaudon. 1994. The re- construction engine: a computer implementation of the comparative method. Computational Linguistics, 20:381-417.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multipath translation lexicon induction via bridge languages",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL 2001: 2nd Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "151--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and David Yarowsky. 2001. Multipath translation lexicon induction via bridge languages. In Proceedings of NAACL 2001: 2nd Meeting of the North American Chapter of the Association for Com- putational Linguistics, pages 151-158.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic discovery of noncompositional compounds in parallel data",
"authors": [
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "97--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 1997. Automatic discovery of non- compositional compounds in parallel data. In Pro- ceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 97- 108.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bitext maps and alignment via pattern recognition",
"authors": [
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "1",
"pages": "107--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 1999. Bitext maps and alignment via pattern recognition. Computational Linguistics, 25(1):107-130.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Models of translational equivalence among words",
"authors": [
{
"first": "I",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Melamed",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "2",
"pages": "221--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dan Melamed. 2000. Models of translational equiv- alence among words. Computational Linguistics, 26(2):221-249.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Computer estimation of vocabulary in protolanguage from word lists in four daughter languages",
"authors": [
{
"first": "P",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Oakes",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Quantitative Linguistics",
"volume": "7",
"issue": "3",
"pages": "233--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael P. Oakes. 2000. Computer estimation of vocab- ulary in protolanguage from word lists in four daugh- ter languages. Journal of Quantitative Linguistics, 7(3):233-243.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using cognates to align sentences in bilingual corpora",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "George",
"middle": [
"F"
],
"last": "Foster",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation",
"volume": "",
"issue": "",
"pages": "67--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Simard, George F. Foster, and Pierre Isabelle. 1992. Using cognates to align sentences in bilingual corpora. In Proceedings of the Fourth International Conference on Theoretical and Methodological Is- sues in Machine Translation, pages 67-81, Montreal, Canada.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Lexico-statistical dating of prehistoric ethnic contacts",
"authors": [
{
"first": "Morris",
"middle": [],
"last": "Swadesh",
"suffix": ""
}
],
"year": 1952,
"venue": "Proceedings of the American Philosophical Society",
"volume": "96",
"issue": "",
"pages": "452--463",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morris Swadesh. 1952. Lexico-statistical dating of pre- historic ethnic contacts. Proceedings of the American Philosophical Society, 96:452-463.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic construction of weighted string similarity measures",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 1999. Automatic construction of weighted string similarity measures. In Proceedings of the Joint SIGDAT Conference on Empirical Meth- ods in Natural Language Processing and Very Large Corpora, College Park, Maryland.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The string-to-string correction problem",
"authors": [
{
"first": "A",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Wagner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fischer",
"suffix": ""
}
],
"year": 1974,
"venue": "Journal of the Association for Computing Machinery",
"volume": "21",
"issue": "1",
"pages": "168--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert A. Wagner and Michael J. Fischer. 1974. The string-to-string correction problem. Journal of the As- sociation for Computing Machinery, 21(1):168-173.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The similarity of word alignment in bitexts and phoneme alignment between cognates."
},
"TABREF1": {
"num": null,
"text": "Note that the co-occurrence counts of u and v are not used for the re-estimation, In Method B, an explicit noise model with auxiliary parameters \u03bb \u2022 and \u03bb is constructed in order to improve the estimation of likelihood scores. \u03bb \u2022 is a probability that a link is induced between two cooccurring words that are mutual translations, while \u03bb is a probability that a link is induced between two co-occurring words that are not mutual translations. Ideally, \u03bb \u2022 should be close to one and \u03bb should be close to zero. The actual values of the two parameters are calculated by the maximum likelihood estimation. Let cooc\u00b4u v\u00b5 be the number of co-occurrences of u and v. The score function is defined as: score B\u00b4u v\u00b5 log B\u00b4links\u00b4u v\u00b5 cooc\u00b4u v\u00b5 \u03bb \u2022 \u00b5 B\u00b4links\u00b4u v\u00b5 cooc\u00b4u v\u00b5 \u03bb \u00b5",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF3": {
"num": null,
"text": "English-Latin correspondences discovered by CORDI in noisy synonym data.",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF5": {
"num": null,
"text": "Average cognate identification precision on the development set for various methods.",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Languages</td><td>Proportion</td><td>COGNATE</td><td>JAKARTA</td><td/><td colspan=\"2\">Method</td><td/></tr><tr><td/><td/><td>of cognates</td><td/><td/><td>A</td><td>B</td><td>C</td><td>D</td></tr><tr><td>English</td><td>German</td><td>.590</td><td>.878</td><td>.888</td><td>.936</td><td>.957</td><td>.952</td><td>.950</td></tr><tr><td>French</td><td>Latin</td><td>.560</td><td>.867</td><td>.787</td><td>.843</td><td>.914</td><td>.838</td><td>.866</td></tr><tr><td>English</td><td>Latin</td><td>.290</td><td>.590</td><td>.447</td><td>.584</td><td>.641</td><td>.749</td><td>.853</td></tr><tr><td>German</td><td>Latin</td><td>.290</td><td>.532</td><td>.518</td><td>.617</td><td>.723</td><td>.736</td><td>.857</td></tr><tr><td>English</td><td>French</td><td>.275</td><td>.324</td><td>.411</td><td>.482</td><td>.528</td><td>.545</td><td>.559</td></tr><tr><td>French</td><td>German</td><td>.245</td><td>.390</td><td>.406</td><td>.347</td><td>.502</td><td>.487</td><td>.528</td></tr><tr><td>Albanian</td><td>Latin</td><td>.195</td><td>.449</td><td>.455</td><td>.403</td><td>.432</td><td>.568</td><td>.606</td></tr><tr><td>Albanian</td><td>French</td><td>.165</td><td>.306</td><td>.432</td><td>.249</td><td>.292</td><td>.319</td><td>.437</td></tr><tr><td>Albanian</td><td>German</td><td>.125</td><td>.277</td><td>.248</td><td>.156</td><td>.177</td><td>.154</td><td>.312</td></tr><tr><td>Albanian</td><td>English</td><td>.100</td><td>.225</td><td>.227</td><td>.302</td><td>.373</td><td>.319</td><td>.196</td></tr><tr><td colspan=\"2\">Average</td><td>.283</td><td>.484</td><td>.482</td><td>.492</td><td>.554</td><td>.567</td><td>.616</td></tr></table>",
"html": null
}
}
}
}