source_text (string, lengths 27–368) | label (int64, 0 or 1) | target_text (string, lengths 1–5.38k) |
---|---|---|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Then, token-level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
We present KenLM, a library that implements two data structures for efficient language model queries, reducing both time and costs.
|
The corpus was annotated with different linguistic information.
| 0 |
We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
8 57.3 +FEATS best median 50.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l ← pivot; if A[pivot] > k then u ← pivot.
|
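The bound updates in the row above describe a binary search over a sorted array. Below is a minimal illustrative sketch of that loop; the array `A` and key `k` are placeholders, and this is not KenLM's actual implementation (which is C++ and uses interpolation to choose the pivot). The `+ 1` on the lower bound is the conventional adjustment that guarantees termination:

```python
def pivot_search(A, k):
    """Binary search for key k in sorted array A.

    The search interval [l, u) shrinks recursively: if A[pivot] < k the
    pivot becomes the new lower bound, if A[pivot] > k it becomes the
    new upper bound, and equality means the key was found.
    """
    l, u = 0, len(A)
    while l < u:
        pivot = (l + u) // 2
        if A[pivot] < k:
            l = pivot + 1   # key, if present, lies strictly above the pivot
        elif A[pivot] > k:
            u = pivot       # key, if present, lies strictly below the pivot
        else:
            return pivot    # found
    return None             # key not in A
```
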
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
8 1 8.
|
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
0 70.9 42.
|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
| 0 |
We assume that the goal in dependency parsing is to construct a labeled dependency graph of the kind depicted in Figure 1.
|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
| 0 |
We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").
|
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
| 0 |
The machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Figure 3 shows a small fragment of the WFST encoding the dictionary, containing both entries for just discussed, g:t¥ zhonglhua2 min2guo2 (China Republic) 'Republic of China,' and i¥inl.
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise.
|
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
| 0 |
PoS tags impose a unique morphological segmentation on surface tokens and present a unique valid yield for syntactic trees.
|
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors lead to syntactic disambiguation.
| 0 |
Manual annotation results in human-interpretable grammars that can inform future treebank annotation decisions.
|
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
TAG's can be shown to be equivalent to this restricted system.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
an event.
|
They focused on phrases which connect two Named Entities, and proceed in two stages.
| 0 |
If the expression is longer or complicated (like "A buys B" and "A's purchase of B"), it is called "paraphrase", i.e. a set of phrases which express the same thing or event.
|
These clusters are computed using an SVD variant without relying on transitional structure.
| 0 |
Our analysis and comparison focuses primarily on the one-to-one accuracy since it is a stricter metric than many-to-one accuracy, but also report many-to-one for completeness.
|
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
| 0 |
The second setting uses the news-related subcorpora for the NIST09 MT Chinese to English evaluation8 as IN, and the remaining NIST parallel Chinese/English corpora (UN, Hong Kong Laws, and Hong Kong Hansard) as OUT.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.
|
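The binary function M_i(c) in the row above suggests a simple majority-vote combination over constituents. The following is a hypothetical sketch, representing each parser's output as a set of constituents, not the authors' actual implementation:

```python
def constituent_voting(parses, threshold=None):
    """Majority voting over constituents proposed by k parsers.

    `parses` is a list of k sets of constituents; M_i(c) corresponds to
    the test `c in parses[i]`.  A constituent enters the combined parse
    when more than half of the parsers suggest it (default threshold).
    """
    k = len(parses)
    if threshold is None:
        threshold = k / 2
    votes = {}
    for parse in parses:
        for c in parse:
            votes[c] = votes.get(c, 0) + 1
    return {c for c, v in votes.items() if v > threshold}
```
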
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
The precision and recall of similarity switching and constituent voting are both significantly better than the best individual parser, and constituent voting is significantly better than parser switching in precision.4 Constituent voting gives the highest accuracy for parsing the Penn Treebank reported to date.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).
|
There is no global pruning.
| 0 |
The details are given in (Och and Ney, 2000).
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
3 68.9 50.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a).
|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
| 0 |
The choice between different actions is in general nondeterministic, and the parser relies on a memorybased classifier, trained on treebank data, to predict the next action based on features of the current parser configuration.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
Since the inclusion of out-of-domain test data was a very late decision, the participants were not informed of this.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.
|
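The dynamic programming algorithm mentioned in this row is presumably Viterbi decoding over word classes. A compact illustrative version over toy class and transition tables follows; the variable names are my own, not from the paper:

```python
def viterbi(obs_probs, trans, init):
    """Select the highest-probability sequence of classes (Viterbi).

    obs_probs: list over positions of {class: P(word | class)}
    trans:     {(prev_class, class): transition probability}
    init:      {class: initial class probability}
    """
    # best[c] = (probability, path) of the best sequence ending in class c
    best = {c: (init[c] * obs_probs[0].get(c, 0.0), [c]) for c in init}
    for probs in obs_probs[1:]:
        new_best = {}
        for c, p_obs in probs.items():
            new_best[c] = max(
                ((bp * trans.get((prev, c), 0.0) * p_obs, path + [c])
                 for prev, (bp, path) in best.items()),
                key=lambda x: x[0],
            )
        best = new_best
    return max(best.values(), key=lambda x: x[0])[1]
```
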
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
As with lexical expectations, the semantic classes of co-referring expressions are 4 They may not be perfectly substitutable, for example one NP may be more specific (e.g., "he" vs. "John F. Kennedy").
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
Taking /V, to be the number of examples an algorithm classified correctly (where all gold standard items labeled noise were counted as being incorrect), we calculated two measures of accuracy: See Tab.
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Evaluation We use 8 different measures to evaluate the performance of our system on the joint disambiguation task.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
constitute names, since we have only their segmentation, not the actual classification of the segmented words.
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
For instance, for out-of-domain English-French, Systran has the best BLEU and manual scores.
|
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
When extracting the vector t, used to compute the constraint feature from the graph, we tried three threshold values for r (see Eq.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Fortunately, we were able to obtain a copy of the full set of sentences from Chang et al. on which Wang, Li, and Chang tested their system, along with the output of their system.18 In what follows we will discuss all cases from this set where our performance on names differs from that of Wang, Li, and Chang.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
A simple extension will be used to handle this problem.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
| 0 |
The general-language features have a slight advantage over the similarity features, and both are better than the SVM feature.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
30 75.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
JI!
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
One may argue with these efforts on normalization, and ultimately their value should be assessed by assessing their impact on inter-annotator agreement.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
name => 2 hanzi family 2 hanzi given 5.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Twentieth-century linguistic work on Chinese (Chao 1968; Li and Thompson 1981; Tang 1988,1989, inter alia) has revealed the incorrectness of this traditional view.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
This is because different judges focused on different language pairs.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
In total 13,976 phrases are assigned to sets of phrases, and the accuracy on our evaluation data ranges from 65 to 99%, depending on the domain and the size of the sets.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
In graph-based learning approaches one constructs a graph whose vertices are labeled and unlabeled examples, and whose weighted edges encode the degree to which the examples they link have the same label (Zhu et al., 2003).
|
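The graph construction described in this row can be illustrated with a toy label-propagation loop in the spirit of Zhu et al. (2003). This is a hedged sketch (iterative weighted averaging with clamped seed vertices), not the authors' actual cross-lingual projection pipeline:

```python
def propagate_labels(edges, seed_labels, n_iter=50):
    """Simple graph label propagation over a weighted undirected graph.

    edges:       {(u, v): weight}; heavier edges encode a stronger
                 belief that u and v share the same label
    seed_labels: {vertex: {label: prob}} for labeled vertices (clamped)
    Returns a label distribution for every vertex in the graph.
    """
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    labels = {v: dict(d) for v, d in seed_labels.items()}
    for _ in range(n_iter):
        new = {}
        for v, nbrs in adj.items():
            if v in seed_labels:              # labeled vertices stay clamped
                new[v] = dict(seed_labels[v])
                continue
            total, z = {}, 0.0
            for u, w in nbrs:                 # weighted average of neighbors
                for lab, p in labels.get(u, {}).items():
                    total[lab] = total.get(lab, 0.0) + w * p
                    z += w * p
            new[v] = {lab: p / z for lab, p in total.items()} if z else {}
        labels = new
    return labels
```
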
Here we present two algorithms.
| 0 |
AdaBoost was first introduced in (Freund and Schapire 97); (Schapire and Singer 98) gave a generalization of AdaBoost which we will use in this paper.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
The data structure was populated with 64-bit integers sampled uniformly without replacement.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
so that 'door' would be and in this case the hanzi 7C, does not represent a syllable.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Search method | t0 | CPU time [sec] | #search errors | mWER [%]: QmS — 0.0/0.07/108/42.6; 1.0/0.13/85/37.8; 2.5/0.35/44/36.6; 5.0/1.92/4/34.6; 10.0/10.6/0/34.5. IbmS — 0.0/0.14/108/43.4; 1.0/0.3/84/39.5; 2.5/0.8/45/39.1; 5.0/4.99/7/38.3; 10.0/28.52/0/38.2. Table 6 shows example translations obtained by the three different approaches.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Exposing this information to the decoder will lead to better hypothesis recombination.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Figure 2 shows timing results.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
37 79.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
| 0 |
At first glance, this seems only peripherally related to our work, since the specific/general distinction is made for features rather than instances.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
The PROBING model can perform optimistic searches by jumping to any n-gram without needing state and without any additional memory.
|
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
We can better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class.
|
There is no global pruning.
| 0 |
Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there will be four uncovered positions for the predecessor hypothesis violating the restriction.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
Obviously "Lotus" is part of the following clause rather than being the object of "estimates" and the extracted instance makes no sense.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
In the named-entity problem each example is a (spelling, context) pair.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
For instance: if 10 systems participate, and one system does better than 3 others, worse then 2, and is not significant different from the remaining 4, its rank is in the interval 3–7.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
Table 4 shows translation results for the three approaches.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
Training examples are generated automatically by identifying noun phrases that can be easily resolved with their antecedents using lexical and syntactic heuristics.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
The normalization factor plays an important role in the AdaBoost algorithm.
|
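The role of the normalization factor can be seen in a single AdaBoost round. The following is a small sketch in the Schapire & Singer style with an illustrative weak hypothesis `h`; it is not the paper's actual co-training setup:

```python
import math

def adaboost_round(examples, labels, weights, h):
    """One AdaBoost round, highlighting the normalization factor Z_t.

    h maps an example to -1/+1; labels are -1/+1.  Z_t renormalizes the
    updated example weights back into a probability distribution.
    """
    # weighted error of the weak hypothesis on the current distribution
    eps = sum(w for x, y, w in zip(examples, labels, weights) if h(x) != y)
    alpha = 0.5 * math.log((1 - eps) / eps)
    # up-weight mistakes, down-weight correct predictions
    unnorm = [w * math.exp(-alpha * y * h(x))
              for x, y, w in zip(examples, labels, weights)]
    z = sum(unnorm)                      # the normalization factor Z_t
    return alpha, [w / z for w in unnorm]
```
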
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
| 0 |
The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).
|
All the texts were annotated by two people.
| 0 |
In (Reitter, Stede 2003) we went a different way and suggested URML5, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, competing analyses can be represented with shared forests.
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Ignoring the identity of the target language words e and e0, the possible partial hypothesis extensions due to the IBM restrictions are shown in Table 2.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
We check, how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: If p(0..k; n, p) < 0.05, or p(0..k; n, p) > 0.95 then we have a statistically significant difference between the systems.
|
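The significance check quoted in this row is a direct binomial computation. A short sketch using the stated thresholds (p(0..k; n, 0.5) < 0.05 or > 0.95); the function names are my own:

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p): at most k successes in n trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def significantly_different(k, n, alpha=0.05):
    """Sign test as described: under the null of two equal systems each
    'win' is a fair coin flip, so the difference is significant when at
    most k wins out of n is very unlikely in either tail."""
    c = binom_cdf(k, n, 0.5)
    return c < alpha or c > 1 - alpha
```
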
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
In our case multi-threading is trivial because our data structures are read-only and uncached.
|
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
| 0 |
Another approach to finding paraphrases is to find phrases which take similar subjects and objects in large corpora by using mutual information of word distribution [Lin and Pantel 01].
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
Search method | CPU time [sec] | mWER [%] | SSER [%]: MonS — 0.9/42.0/30.5; QmS — 10.6/34.4/23.8; IbmS — 28.6/38.2/26.2. 4.2 Performance Measures.
|
All the texts were annotated by two people.
| 0 |
basically complete, yet some improvements and extensions are still under way.
|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
| 0 |
However, there are phrases which express the same meanings even though they do not share the same keyword.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
Our work is closest to that of Yarowsky and Ngai (2001), but differs in two important ways.
|
There are clustering approaches that assign a single POS tag to each word type.
| 0 |
See Table 2 for the tag set size of other languages.
|
There is no global pruning.
| 0 |
SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.
|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
| 0 |
i..f,..
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
We propose an unsupervised method to discover paraphrases from a large untagged corpus, without requiring any seed phrase or other cue.
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
There is a "core corpus" of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
We adopted the MUC6 guidelines for evaluating coreference relationships based on transitivity in anaphoric chains.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead).
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments.
|
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
| 0 |
The German finite verbs 'bin' (second example) and 'konnten' (third example) are too far away from the personal pronouns 'ich' and 'Sie' (6 respectively 5 source sentence positions).
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
Given the limited number of judgements we received, we did not try to evaluate this.
|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
| 0 |
needs to be in initCaps to be considered for this feature.
|
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
| 0 |
and f,.
|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
| 0 |
.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
We weight edges using a unigram language model estimated with Good-Turing smoothing.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
92 76.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009).
|
This paper talks about Pseudo-Projective Dependency Parsing.
| 0 |
The choice between different actions is in general nondeterministic, and the parser relies on a memorybased classifier, trained on treebank data, to predict the next action based on features of the current parser configuration.
|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
| 0 |
It is also worth pointing out a connection with Daumé's (2007) work that splits each feature into domain-specific and general copies.
|
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
| 0 |
Lossy compressed models RandLM (Talbot and Osborne, 2007) and Sheffield (Guthrie and Hepple, 2010) offer better memory consumption at the expense of CPU and accuracy.
|