Dataset schema: source_text (string, lengths 27–368) · label (int64, values 0 or 1) · target_text (string, lengths 1–5.38k)
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Third, all remaining anaphora are evaluated by 11 different knowledge sources: the four contextual role knowledge sources just described and seven general knowledge sources.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
LCFRS's share several properties possessed by the class of mildly context-sensitive formalisms discussed by Joshi (1983/85).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
How do additional ambiguities caused by devocalization affect statistical learning?
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Find keywords for each NE pair When we look at the contexts for each domain, we noticed that there is one or a few important words which indicate the relation between the NEs (for example, the word “unit” for the phrase “a unit of”).
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Sentences and systems were randomly selected and randomly shuffled for presentation.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Here, all token-internal collocations of tags unseen in our training data are pruned away.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
This left 962 examples, of which 85 were noise.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Frontier nodes are annotated by zero-arity functions corresponding to elementary structures.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The CoBoost algorithm just described is for the case where there are two labels: for the named entity task there are three labels, and in general it will be useful to generalize the CoBoost algorithm to the multiclass case.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
In the following, we assume that this word joining has been carried out.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
We attain these results using several optimizations: hashing, custom lookup tables, bit-level packing, and state for left-to-right query patterns.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The NE tagger is a rule-based system with 140 NE categories [Sekine et al. 2004].
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The first issue relates to the completeness of the base lexicon.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The ratio of buckets to entries is controlled by space multiplier m > 1.
They found replacing it with a ranked evaluation to be more suitable.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Keyword detection error Even if a keyword consists of a single word, there are words which are not desirable as keywords for a domain.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which bear on syntactic disambiguation.
0
All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly.
There are clustering approaches that assign a single POS tag to each word type.
0
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
), which precludes a single universal approach to adaptation.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
(a) shen3me0 shi2hou4 wo3 cai2 neng2 ke4fu2 zhe4ge4 kun4 (gloss: what time I just be able overcome this CL difficulty) 'When will I be able to overcome this difficulty?'
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp., and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. For a token that is in a consecutive sequence of init, the feature Corporate-Suffix is set to 1.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
The set of candidate constituents comes from the union of all the constituents suggested by the member parsers.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
was done by the participants.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Surprisingly, this effect is much less obvious for out-of-domain test data.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
When a company buys another company, a paying event can occur, but these two phrases do not indicate the same event.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We claim that no particular morphological segmentation is a priori more likely for surface forms before exploring the compositional nature of syntactic structures, including manifestations of various long-distance dependencies.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The entire treebank is used in the experiment, but only primary dependencies are considered. In all experiments, punctuation tokens are included in the data but omitted in evaluation scores.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Furthermore, we expect the label distributions on the foreign side to be fairly noisy, because the graph constraints have not been taken into account yet.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
This has a standard solution, where pI(s|t) is derived from the IN corpus using relative-frequency estimates, and pO(s|t) is an instance-weighted model derived from the OUT corpus.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The confidence intervals are computed by bootstrap resampling for BLEU, and by standard significance testing for the manual scores, as described earlier in the paper.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
However, in existing systems, this expansion comes with a steep increase in model complexity.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
We use several iterations of sampling (see Figure 2 for a depiction).
There are clustering approaches that assign a single POS tag to each word type.
0
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Each ht is a function that predicts a label (+1 or −1) on examples containing a particular feature xt, while abstaining on other examples. The prediction of the strong hypothesis can then be written as a weighted vote over the weak hypotheses. We now briefly describe how to choose ht and αt at each iteration.
There is no global pruning.
0
The computing time is given in terms of CPU time per sentence (on a 450 MHz Pentium III PC).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Figure 2: An abstract example illustrating the segmentation algorithm.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Decreasing the threshold results in higher mWER due to additional search errors.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
For example, we can easily imagine that the number of paraphrases for “A buys B” is enormous and it is not possible to create a comprehensive inventory by hand.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
2.2 Contextual Role Knowledge.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Replacing this with a ranked evaluation seems to be more suitable.
Their results show that their high-performance NER uses less training data than other systems.
0
Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
When the same token is to be interpreted as a single lexeme fmnh, it may function as a single adjective “fat”.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Another approach to finding paraphrases is to find phrases which take similar subjects and objects in large corpora by using mutual information of word distribution [Lin and Pantel 01].
It is probably the first analysis of Arabic parsing of this kind.
0
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Proper-Name Identification.
It is probably the first analysis of Arabic parsing of this kind.
0
Test set OOV rate is computed using the following splits: ATB (Chiang et al., 2006); CTB6 (Huang and Harper, 2009); Negra (Dubey and Keller, 2003); English, sections 2–21 (train) and section 23 (test).
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
The probability of the classes assigned to the words in a sentence in a document is defined as a product of per-word class probabilities, each determined by the maximum entropy classifier.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
This is especially true in the case of quotations—which are common in the ATB—where (1) will follow a verb like (2) (Figure 1).
A beam search concept is applied as in speech recognition.
0
For the error counts, a range from 0.0 to 1.0 is used.
They found replacing it with a ranked evaluation to be more suitable.
0
The predominant focus on building systems that translate into English has so far ignored the difficult issues of generating rich morphology, which may not be determined solely by local context.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Each xij is a member of X, where X is a set of possible features.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
In this section, we extend state to optimize left-to-right queries.
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
We showed that any system defined in this way can be recognized in polynomial time.
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
In this section for the purposes of showing that polynomial time recognition is possible, we make the additional restriction that the contribution of a derived structure to the input string can be specified by a bounded sequence of substrings of the input.
Combining multiple highly-accurate independent parsers yields promising results.
0
All of these systems were run on data that was not seen during their development.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Finally, we incorporate the instance-weighting model into a general linear combination, and learn weights and mixing parameters simultaneously, where cλ(s, t) is a modified count for pair (s, t) in OUT, u(s|t) is a prior distribution, and y is a prior weight.
Their results show that their high-performance NER uses less training data than other systems.
0
For a token such as Sunday, the feature DayOfTheWeek is set to 1.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
More details on the parsing algorithm can be found in Nivre (2003).
This topic has been getting more attention, driven by the needs of various NLP applications.
0
We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The German finite verbs 'bin' (second example) and 'konnten' (third example) are too far away from the personal pronouns 'ich' and 'Sie' (6 and 5 source sentence positions, respectively).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Many packages perform language model queries.
The texts were annotated with the RSTtool.
0
The wounds are still healing.), entity-attribute (e.g., She 2001), who determined that in their corpus of German computer texts, 38% of relations were lexically signalled.
There is no global pruning.
0
({1, …, m} \ {l, l1}, l′) → ({1, …, m}, l)
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
If (wi, r, wj) ∈ A, we say that wi is the head of wj and wj a dependent of wi.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
We note that it is not always clear in Wang, Li, and Chang's examples which segmented words are intended.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The traveling salesman problem is an optimization problem which is defined as follows: given a set of cities S = {s1, …, sn} and, for each pair of cities si, sj, the cost dij > 0 for traveling from city si to city sj, we look for the shortest tour visiting all cities exactly once while starting and ending in city s1.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
The data is sorted based on the frequency of the context (“a unit of” appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. “NBC” and “General Electric Co.” appeared 10 times with the context “a unit of”).
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Such discrepancies can be aligned via an intermediate level of PoS tags.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We focus on phrases which connect two Named Entities (NEs), and proceed in two stages.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Most similar is scrolling queries, wherein left-to-right queries that add one word at a time are optimized.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
In this shared task, we were also confronted with this problem, and since we had no funding to pay for human judgements, we asked participants in the evaluation to share the burden.
These clusters are computed using an SVD variant without relying on transitional structure.
0
In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Mérialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Graça et al., 2009; Berg-Kirkpatrick et al., 2010).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0