source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
It rewards matches of n-gram sequences, but measures only at most indirectly overall grammatical coherence.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
(b) After they were released...
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The BerkeleyLM direct-mapped cache is in principle faster than caches implemented by RandLM and by IRSTLM, so we may write a C++ equivalent implementation as future work.
All the texts were annotated by two people.
0
Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Multiple features can be used for the same token.
This assumption, however, is not inherent to type-based tagging models.
0
8 57.3 +F EA TS be st me dia n 50.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
As mentioned above, it is not obvious how to apply Daum´e’s approach to multinomials, which do not have a mechanism for combining split features.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P (T |ψ).
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
It is also true of the adaptation of the Collins parser for Czech (Collins et al., 1999) and the finite-state dependency parser for Turkish by Oflazer (2003).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
We allow any number of bits from 2 to 25, unlike IRSTLM (8 bits) and BerkeleyLM (17−20 bits).
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The sign test checks, how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.
It is probably the first analysis of Arabic parsing of this kind.
0
73 81.
The corpus was annoted with different linguitic information.
0
Having explained the various layers of annotation in PCC, we now turn to the question what all this might be good for.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
The system of Berg-Kirkpatrick et al.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
The details of the transformation procedure are slightly different depending on the encoding schemes: d↑h let the linear head be the syntactic head). target arc must have the form wl −→ wm; if no target arc is found, Head is used as backoff. must have the form wl −→ wm and no outgoing arcs of the form wm p'↓ −→ wo; no backoff.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The iw all map variant uses a non-0 y weight on a uniform prior in p,,(s t), and outperforms a version with y = 0 (iw all) and the “flattened” variant described in section 3.2.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
37.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Word Re-ordering and DP-based Search in Statistical Machine Translation
Replacing this with a ranked evaluation seems to be more suitable.
0
Annotators argued for the importance of having correct and even multiple references.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
A position is presented by the word at that position.
A beam search concept is applied as in speech recognition.
0
am 11.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form.
There is no global pruning.
0
Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated.
They have made use of local and global features to deal with the instances of same token in a document.
0
However, 1 http://maxent.sourceforge.net 3.2 Testing.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
21 84.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
The work reported here is closely related to [Ha- segawa et al. 04].
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
4.5 Transliterations of Foreign Words.
This paper conducted research in the area of automatic paraphrase discovery.
0
Here, an NE instance pair is any pair of NEs separated by at most 4 syntactic chunks; for example, “IBM plans to acquire Lotus”.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-8) (1-6) lcc (1-6) (1-7) (1-4) utd (1-7) (1-6) (2-7) upc-mr (1-8) (1-6) (1-7) nrc (1-7) (2-6) (8) ntt (1-8) (2-8) (1-7) cmu (3-7) (4-8) (2-7) rali (5-8) (3-9) (3-7) systran (9) (8-9) (10) upv (10) (10) (9) Spanish-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-6) (1-5) ntt (1-7) (1-8) (1-5) lcc (1-8) (2-8) (1-4) utd (1-8) (2-7) (1-5) nrc (2-8) (1-9) (6) upc-mr (1-8) (1-6) (7) uedin-birch (1-8) (2-10) (8) rali (3-9) (3-9) (2-5) upc-jg (7-9) (6-9) (9) upv (10) (9-10) (10) German-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) uedin-phi (1-2) (1) (1) lcc (2-7) (2-7) (2) nrc (2-7) (2-6) (5-7) utd (3-7) (2-8) (3-4) ntt (2-9) (2-8) (3-4) upc-mr (3-9) (6-9) (8) rali (4-9) (3-9) (5-7) upc-jmc (2-9) (3-9) (5-7) systran (3-9) (3-9) (10) upv (10) (10) (9) Figure 7: Evaluation of translation to English on in-domain test data 112 English-French (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) nrc (1-5) (1-5) (1-6) upc-mr (1-4) (1-5) (1-6) upc-jmc (1-6) (1-6) (1-5) systran (2-7) (1-6) (7) utd (3-7) (3-7) (3-6) rali (1-7) (2-7) (1-6) ntt (4-7) (4-7) (1-5) English-Spanish (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) ms (1-5) (1-7) (7-8) upc-mr (1-4) (1-5) (1-4) utd (1-5) (1-6) (1-4) nrc (2-7) (1-6) (5-6) ntt (3-7) (1-6) (1-4) upc-jmc (2-7) (2-7) (1-4) rali (5-8) (6-8) (5-6) uedin-birch (6-9) (6-10) (7-8) upc-jg (9) (8-10) (9) upv (9-10) (8-10) (10) English-German (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-5) (3-5) ntt (1-5) (2-6) (1-3) upc-jmc (1-5) (1-4) (1-3) nrc (2-4) (1-5) (4-5) rali (3-6) (2-6) (1-4) systran (5-6) (3-6) (7) upv (7) (7) (6) Figure 8: Evaluation of translation from English on in-domain test data 113 French-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-5) (1-8) (1-4) cmu (1-8) (1-9) (4-7) systran (1-8) (1-7) (9) lcc (1-9) (1-9) (1-5) upc-mr (2-8) (1-7) (1-3) utd (1-9) (1-8) (3-7) ntt (3-9) (1-9) (3-7) nrc (3-8) (3-9) (3-7) rali (4-9) (5-9) (8) upv (10) (10) (10) Spanish-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-2) (1-6) (1-3) uedin-birch (1-7) (1-6) (5-8) nrc (2-8) (1-8) (5-7) ntt (2-7) (2-6) (3-4) upc-mr (2-8) (1-7) (5-8) lcc (4-9) (3-7) (1-4) utd (2-9) (2-8) (1-3) upc-jg (4-9) (7-9) (9) rali (4-9) (6-9) (6-8) upv (10) (10) (10) German-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1-4) (1-4) (7-9) uedin-phi (1-6) (1-7) (1) lcc (1-6) (1-7) (2-3) utd (2-7) (2-6) (4-6) ntt (1-9) (1-7) (3-5) nrc (3-8) (2-8) (7-8) upc-mr (4-8) (6-8) (4-6) upc-jmc (4-8) (3-9) (2-5) rali (8-9) (8-9) (8-9) upv (10) (10) (10) Figure 9: Evaluation of translation to English on out-of-domain test data 114 English-French (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1) (1) upc-jmc (2-5) (2-4) (2-6) upc-mr (2-4) (2-4) (2-6) utd (2-6) (2-6) (7) rali (4-7) (5-7) (2-6) nrc (4-7) (4-7) (2-5) ntt (4-7) (4-7) (3-6) English-Spanish (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-6) (1-2) ms (1-7) (1-8) (6-7) utd (2-6) (1-7) (3-5) nrc (1-6) (2-7) (3-5) upc-jmc (2-7) (1-6) (3-5) ntt (2-7) (1-7) (1-2) rali (6-8) (4-8) (6-8) uedin-birch (6-10) (5-9) (7-8) upc-jg (8-9) (9-10) (9) upv (9) (8-9) (10) English-German (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1-2) (1-6) upc-mr (2-3) (1-3) (1-5) upc-jmc (2-3) (3-6) (1-6) rali (4-6) (4-6) (1-6) nrc (4-6) (2-6) (2-6) ntt (4-6) (3-5) (1-6) upv (7) (7) (7) Figure 10: Evaluation of translation from English on out-of-domain test data 115 French-English In domain Out of Domain Adequacy Adequacy 0.3 0.3 • 0.2 0.2 0.1 0.1 -0.0 -0.0 -0.1 -0.1 -0.2 -0.2 -0.3 -0.3 -0.4 -0.4 -0.5 -0.5 -0.6 -0.6 -0.7 -0.7 •upv -0.8 -0.8 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 •upv •systran upcntt • rali upc-jmc • cc Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upv -0.5 •systran •upv upc -jmc • Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 • • • td t cc upc- • rali 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 Figure 11: Correlation between manual and automatic scores for French-English 116 Spanish-English Figure 12: Correlation between manual and automatic scores for Spanish-English -0.3 -0.4 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 •upv -0.4 •upv -0.3 In Domain •upc-jg Adequacy 0.3 0.2 0.1 -0.0 -0.1 -0.2 Out of Domain •upc-jmc •nrc •ntt Adequacy upc-jmc • • •lcc • rali • •rali -0.7 -0.5 -0.6 •upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 • •rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 ntt • upc-mr •lcc •utd •upc-jg •rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upc-jmc • uedin-birch -0.5 -0.5 •upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 117 In Domain Out of Domain Adequacy Adequacy German-English 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 lcc • upc-jmc •systran •upv Fluency •ula •upc-mr •lcc 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •systran •upv •uedin-phi -jmc •rali •systran -0.3 -0.4 -0.5 -0.6 •upv 12 13 14 15 16 17 18 19 20 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 Fluency uedin-phi • • •utd •upc-jmc •upc-mr 0.4 •rali -0.3 -0.4 -0.5 •upv 12 13 14 15 16 17 18 19 20 0.3 0.2 0.1 -0.0 -0.1 -0.2 English-French In Domain Out of Domain Adequacy Adequacy .
Here we present two algorithms.
0
There are two differences between this method and the DL-CoTrain algorithm: spelling and contextual features, alternating between labeling and learning with the two types of features.
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
We define an ATM, M, recognizing a language generated by a grammar, G, having the properties discussed in Section 43.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
We incur some additional memory cost due to storing state in each hypothesis, though this is minimal compared with the size of the model itself.
A beam search concept is applied as in speech recognition.
0
For each source word f, the list of its possible translations e is sorted according to p(fje) puni(e), where puni(e) is the unigram probability of the English word e. It is suÆcient to consider only the best 50 words.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Finally, we thank Kuzman Ganchev and the three anonymous reviewers for helpful suggestions and comments on earlier drafts of this paper.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Daum´e (2007) applies a related idea in a simpler way, by splitting features into general and domain-specific versions.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
At the phrasal level, we remove all function tags and traces.
Here both parametric and non-parametric models are explored.
0
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
It is probably the first analysis of Arabic parsing of this kind.
0
We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
There has been a lot of research on such lexical relations, along with the creation of resources such as WordNet.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Several systems propose statistical methods for handling unknown words (Chang et al. 1992; Lin, Chiang, and Su 1993; Peng and Chang 1993).
The texts were annotated with the RSTtool.
0
information structure.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
When using the segmentation pruning (using HSPELL) for unseen tokens, performance improves for all tasks as well.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verbgroup is generated.
This assumption, however, is not inherent to type-based tagging models.
0
La ng ua ge # To ke ns # W or d Ty pe s # Ta gs E ng lis h D a ni s h D u tc h G e r m a n P or tu g u e s e S p a ni s h S w e di s h 1 1 7 3 7 6 6 9 4 3 8 6 2 0 3 5 6 8 6 9 9 6 0 5 2 0 6 6 7 8 8 9 3 3 4 1 9 1 4 6 7 4 9 2 0 6 1 8 3 5 6 2 8 3 9 3 7 2 3 2 5 2 8 9 3 1 1 6 4 5 8 2 0 0 5 7 4 5 2 5 1 2 5 4 2 2 4 7 4 1 Table 2: Statistics for various corpora utilized in experiments.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We have presented two general approaches to studying parser combination: parser switching and parse hybridization.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We propose a limit of 70 words for Arabic parsing evaluations.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Since the inclusion of out-ofdomain test data was a very late decision, the participants were not informed of this.
Two general approaches are presented and two combination techniques are described for each approach.
0
The first shows how constituent features and context do not help in deciding which parser to trust.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Precision is the portion of hypothesized constituents that are correct and recall is the portion of the Treebank constituents that are hypothesized.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Out of those 15 links, 4 are errors, namely “buy - pay”, “acquire - pay”, “purchase - stake” “acquisition - stake”.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
For example, suppose one is building a ITS system for Mandarin Chinese.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
We can 5 Recall that precision is defined to be the number of correct hits divided by the total number of items.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
1
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The code is opensource, has minimal dependencies, and offers both C++ and Java interfaces for integration.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
First, the model assumes independence between the first and second hanzi of a double given name.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
We group the features used into feature groups.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
0 57.2 43.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The authors provided us with a ratio between TPT and SRI under different conditions. aLossy compression with the same weights. bLossy compression with retuned weights. ditions make the value appropriate for estimating repeated run times, such as in parameter tuning.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
If the log backoff of wnf is also zero (it may not be in filtered models), then wf should be omitted from the state.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
In this case we are interested in finding' the maximum probability parse, ri, and Mi is the set of relevant (binary) parsing decisions made by parser i. ri is a parse selected from among the outputs of the individual parsers.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Second, we show that although the Penn Arabic Treebank is similar to other tree- banks in gross statistical terms, annotation consistency remains problematic.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Email: gale@research.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
On several languages, we report performance exceeding that of more complex state-of-the art systems.1
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
11 www.ling.unipotsdam.de/sfb/projekt a3.php 12 This step was carried out in the course of the diploma thesis work of David Reitter (2003), which de serves special mention here.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
But for eign learners are often surprised by the verbless predications that are frequently used in Arabic.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
All our results are obtained by using only the official training data provided by the MUC conferences.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
2.1.1 Lexical Seeding It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
If (wi, r, wj) E A, we say that wi is the head of wj and wj a dependent of wi.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU.
This paper talks about Pseudo-Projective Dependency Parsing.
0
It is also true of the adaptation of the Collins parser for Czech (Collins et al., 1999) and the finite-state dependency parser for Turkish by Oflazer (2003).
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.
All the texts were annotated by two people.
0
2.2 Syntactic structure.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
We tried two versions of our graph-based approach: feature after the first stage of label propagation (Eq.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Values in the trie are minimally sized at the bit level, improving memory consumption over trie implementations in SRILM, IRSTLM, and BerkeleyLM.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
In Table 7 we give results for several evaluation metrics.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Manual annotation results in human in- terpretable grammars that can inform future tree- bank annotation decisions.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
For the inverted alignment probability p(bijbi􀀀1; I; J), we drop the dependence on the target sentence length I. 2.2 Word Joining.
There is no global pruning.
0
On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from Table 2: Coverage set hypothesis extensions for the IBM reordering.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
There is a fairly large body of work on SMT adaptation.
There is no global pruning.
0
(1), Pr(eI 1) is the language model, which is a trigram language model in this case.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
In the small window on the left, search queries can be entered, here one for an NP that has been annotated on the co-reference layer as bridging.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Specifically, we assume each word type W consists of feature-value pairs (f, v).
All the texts were annotated by two people.
0
Like in the co-reference annotation, G¨otze’s proposal has been applied by two annotators to the core corpus but it has not been systematically evaluated yet.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
It is well-known that constituency parsing models designed for English often do not generalize easily to other languages and treebanks.1 Explanations for this phenomenon have included the relative informativeness of lexicalization (Dubey and Keller, 2003; Arun and Keller, 2005), insensitivity to morphology (Cowan and Collins, 2005; Tsarfaty and Sima’an, 2008), and the effect of variable word order (Collins et al., 1999).
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Finally, we model the probability of a new transliterated name as the product of PTN and PTN(hanzi;) for each hanzi; in the putative name.13 The foreign name model is implemented as an WFST, which is then summed with the WFST implementing the dictionary, morpho 13 The current model is too simplistic in several respects.