source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
The resulting model is compact, efficiently learnable and linguistically expressive.
0
There are two key benefits of this model architecture.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
e′ and e are the last two target words, C is a coverage set for the already covered source positions, and j is the last position visited.
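To make that search state concrete, here is a minimal Python sketch of such a partial hypothesis and one extension step; the names (Hypothesis, extend) and the toy costs are illustrative assumptions, not the authors' implementation.

```python
from collections import namedtuple

# Hypothetical representation of a partial hypothesis (e', e; C, j):
# the last two target words, the coverage set of already translated
# source positions, and the last source position visited.
Hypothesis = namedtuple("Hypothesis", ["e_prev", "e_last", "coverage", "j", "score"])

def extend(hyp, src_pos, target_word, cost):
    """Extend a partial hypothesis by translating one source position."""
    assert src_pos not in hyp.coverage
    return Hypothesis(
        e_prev=hyp.e_last,
        e_last=target_word,
        coverage=hyp.coverage | {src_pos},
        j=src_pos,
        score=hyp.score + cost,  # accumulated negative log-probability
    )

start = Hypothesis("<s>", "<s>", frozenset(), -1, 0.0)
h1 = extend(start, 0, "we", 1.2)
h2 = extend(h1, 1, "agree", 0.8)
print(h2.coverage, h2.j, round(h2.score, 2))
```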
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
More details on the parsing algorithm can be found in Nivre (2003).
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
0
Skipped (K): The translation of up to one word may be postponed. Verb (V): The translation of up to two words may be anticipated.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
This is not completely surprising, since all systems use very similar technology.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Tsarfaty and Sima’an (2007) have reported state-of-the-art results on Hebrew unlexicalized parsing (74.41%) albeit assuming oracle morphological segmentation.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.
This paper talks about Unsupervised Models for Named Entity Classification.
0
In this case nonalpha is the string formed by removing all upper/lower case letters from the spelling (e.g., for Thomas E. Petry, nonalpha = .).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
'Malaysia.'
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed that they can be recognized in polynomial time and generate only semilinear languages.
0
Little attention, however, has been paid to the structural descriptions that these formalisms can assign to strings, i.e. their strong generative capacity.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Figure 2 shows a screenshot (which is of somewhat limited value, though, as color plays a major role in signalling the different statuses of the information).
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
The following three sections elaborate on these different stages in more detail.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
shortest match at each point.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Chunking is not enough to find such relationships.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The availability of comparable corpora is limited, which is a significant limitation on the approach.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We retain segmentation markers—which are consistent only in the vocalized section of the treebank—to differentiate between e.g. هم “they” and هم+ “their.” Because we use the vocalized section, we must remove null pronoun markers.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The discovered paraphrases can be a big help to reduce human labor and create a more comprehensive pattern set.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The text type is editorials instead of speech transcripts.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
First, we describe how the caseframes are represented and learned.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
These are shown, with their associated costs, as follows: AB/nc 4.0, ABC/jj 6.0, CD/vb 5.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
In all cases, the key is collapsed to its 64-bit hash.
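A minimal sketch of what collapsing keys to a 64-bit hash can look like; the hash function here (truncated SHA-1 via hashlib) is an assumption chosen for portability, not the hash the library actually uses.

```python
import hashlib

def key_hash64(words):
    """Collapse an n-gram key to a 64-bit hash (sketch only)."""
    data = " ".join(words).encode("utf-8")
    return int.from_bytes(hashlib.sha1(data).digest()[:8], "little")

# Store only the hash, not the words themselves; rare collisions are
# simply accepted, as in a lossy key-value table.
table = {}
table[key_hash64(("the", "cat", "sat"))] = -2.31  # log-prob payload

query = key_hash64(("the", "cat", "sat"))
print(table.get(query))  # -2.31
```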
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
In this subsection, we will report the results of the experiment, in terms of the number of words, phrases or clusters.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags.
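The following sketch illustrates that effect with a generic label-propagation update that includes a small uniform term, so that vertices with no path to any labeled vertex converge to a uniform distribution over tags. The function propagate, the affinity matrix W, and the parameter nu are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def propagate(W, seed, num_tags, nu=1e-6, iters=50):
    """Generic label propagation: W is a symmetric affinity matrix,
    seed maps labeled vertex index -> gold tag. The uniform term nu
    keeps marginals of unreachable vertices uniform over all tags."""
    n = W.shape[0]
    U = np.full(num_tags, 1.0 / num_tags)
    q = np.tile(U, (n, 1))
    for v, tag in seed.items():
        q[v] = np.eye(num_tags)[tag]          # clamp labeled vertices
    for _ in range(iters):
        for v in range(n):
            if v in seed:
                continue
            num = W[v] @ q + nu * U           # neighbor average + uniform prior
            q[v] = num / num.sum()
    return q

# Two connected vertices and one isolated vertex: the isolated one stays uniform.
W = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
print(propagate(W, seed={0: 1}, num_tags=2).round(3))
```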
The second algorithm builds on a boosting algorithm called AdaBoost.
0
It was motivated by the observation that the (Yarowsky 95) algorithm added a very large number of rules in the first few iterations.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
At each point during the derivation, the parser has a choice between pushing the next input token onto the stack – with or without adding an arc from the token on top of the stack to the token pushed – and popping a token from the stack – with or without adding an arc from the next input token to the token popped.
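A compact sketch of those four choices as state transitions; the function and action names are hypothetical, and the preconditions a real implementation of the parser would check are omitted.

```python
def step(action, stack, buffer, arcs):
    """One parser step, following the four choices described above (sketch)."""
    if action == "push":                # push next token, no arc
        stack.append(buffer.pop(0))
    elif action == "push-arc":          # push, adding arc top -> next
        arcs.append((stack[-1], buffer[0]))
        stack.append(buffer.pop(0))
    elif action == "pop":               # pop top of stack, no arc
        stack.pop()
    elif action == "pop-arc":           # pop, adding arc next -> top
        arcs.append((buffer[0], stack[-1]))
        stack.pop()
    return stack, buffer, arcs

stack, buffer, arcs = [], ["A", "B", "C"], []
step("push", stack, buffer, arcs)
step("push-arc", stack, buffer, arcs)   # arc A -> B
step("pop-arc", stack, buffer, arcs)    # arc C -> B
print(arcs)  # [('A', 'B'), ('C', 'B')]
```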
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Standard SMT systems have a hierarchical parameter structure: top-level log-linear weights are used to combine a small set of complex features, interpreted as log probabilities, many of which have their own internal parameters and objectives.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
There are two key benefits of this model architecture.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
and f,.
They found replacing it with a ranked evaluation to be more suitable.
0
In Figure 4, we display the number of system comparisons for which we concluded statistical significance.
It is probably the first analysis of Arabic parsing of this kind.
0
The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
We model each parse as the decisions made to create it, and model those decisions as independent events.
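In other words, the parse probability is just the product of the probabilities of the decisions that built it. A toy illustration (the decision probabilities are made up):

```python
import math

# A parse scored as the product of its independent decision probabilities.
decisions = [0.9, 0.7, 0.95, 0.8]
log_prob = sum(math.log(p) for p in decisions)  # sum logs for stability
print(math.exp(log_prob))  # ~0.4788
```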
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Two of the Mainlanders also cluster close together but, interestingly, not particularly close to the Taiwan speakers; the third Mainlander is much more similar to the Taiwan speakers.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.
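One common way to realize such interval estimation is a percentile bootstrap over the per-sentence judgements; the sketch below is an assumption about the procedure, since the excerpt does not pin down the exact test used.

```python
import random

def bootstrap_ci(scores, samples=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a system's mean score."""
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n for _ in range(samples)
    )
    lo = means[int(samples * alpha / 2)]
    hi = means[int(samples * (1 - alpha / 2)) - 1]
    return lo, hi

judgements = [3, 4, 2, 5, 4, 3, 4, 2, 3, 4]  # made-up adequacy scores
print(bootstrap_ci(judgements))
```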
The texts were annotated with the RSTtool.
0
In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information — as in PCC.
Replacing this with a ranked evaluation seems to be more suitable.
0
This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Note also that the costs currently used in the system are actually string costs, rather than word costs.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
The development of automatic scoring methods is an open field of research.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
We will report the evaluation results in the next subsection.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
By contrast, when we turn to a comparison of the three encoding schemes it is hard to find any significant differences, and the overall impression is that it makes little or no difference which encoding scheme is used, as long as there is some indication of which words are assigned their linear head instead of their syntactic head by the projective parser.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
But it conflates the coordinating and discourse separator functions of wa (و) into one analysis: conjunction (Table 3).
There is no global pruning.
0
For a trigram language model, the partial hypotheses are of the form (e′, e; C, j).
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Figure 3 shows learning curves for CoBoost.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
We evaluate our model on seven languages exhibiting substantial syntactic variation.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Here, the term frequency (TF) is the frequency of a word in the bag and the inverse term frequency (ITF) is the inverse of the log of the frequency in the entire corpus.
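A small sketch of this TF·ITF weighting; the function name tf_itf and the toy corpus frequencies are invented for illustration.

```python
import math
from collections import Counter

def tf_itf(bag, corpus_freq):
    """TF * ITF as described above: TF is the count of a word in the bag,
    ITF the inverse of the log of its frequency in the entire corpus."""
    tf = Counter(bag)
    return {w: tf[w] / math.log(corpus_freq[w]) for w in tf}

corpus_freq = {"acquire": 1200, "unit": 5300, "of": 911000}
bag = ["acquire", "unit", "unit", "of"]
print(tf_itf(bag, corpus_freq))
```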
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
We can now add a new weak hypothesis h_t based on a feature in X_1 with a confidence value α_t; h_t and α_t are chosen to minimize the function. We now define, for 1 ≤ i ≤ n, the following virtual distribution; as before, Z_t is a normalization constant.
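The update has the shape of the standard AdaBoost step. A hedged sketch follows: it shows the usual closed-form choice of α and the renormalization by Z_t, not the exact CoBoost objective.

```python
import math

def boost_step(D, labels, predictions):
    """One AdaBoost-style update: pick alpha in closed form from the
    weighted error, reweight examples, and renormalize by Z_t."""
    eps = sum(d for d, y, h in zip(D, labels, predictions) if y != h)
    alpha = 0.5 * math.log((1 - eps) / eps)  # assumes 0 < eps < 1
    new_D = [d * math.exp(-alpha * y * h)
             for d, y, h in zip(D, labels, predictions)]
    Z = sum(new_D)  # the normalization constant Z_t
    return [d / Z for d in new_D], alpha

D = [0.25] * 4
labels = [+1, +1, -1, -1]
preds = [+1, -1, -1, -1]  # weak hypothesis errs on example 2
D2, alpha = boost_step(D, labels, preds)
print(round(alpha, 3), [round(d, 3) for d in D2])
```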
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
There is a sizable literature on Chinese word segmentation: recent reviews include Wang, Su, and Mo (1990) and Wu and Tseng (1993).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
If e < b then the key is not found.
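That bound check is the exit condition of a binary search over a sorted key array. A sketch, with the caveats that the real package also offers variants such as interpolation search and that this simplified version returns None rather than a payload pointer:

```python
def find(keys, target):
    """Binary search over sorted keys, keeping lower bound b and
    upper bound e; once e < b, the key is not present."""
    b, e = 0, len(keys) - 1
    while b <= e:
        mid = (b + e) // 2
        if keys[mid] < target:
            b = mid + 1
        elif keys[mid] > target:
            e = mid - 1
        else:
            return mid
    return None  # e < b: not found

keys = [3, 17, 42, 99, 1024]
print(find(keys, 42), find(keys, 5))  # 2 None
```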
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
To lower the barrier of entrance to the competition, we provided a complete baseline MT system, along with data resources.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Chinese 汉字 han4zi4 'Chinese character'; this is the same word as Japanese kanji.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
To set β, we used the same criterion as for α, over a dev corpus. The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995). Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; Lü et al., 2007), or individual target hypotheses (Zhao et al., 2004).
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
This has led previous workers to adopt ad hoc linear weighting schemes (Finch and Sumita, 2008; Foster and Kuhn, 2007; Lü et al., 2007).
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
BABAR uses information extraction patterns to identify contextual roles and creates four contextual role knowledge sources using unsupervised learning.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Replacing this with a ranked evaluation seems to be more suitable.
0
For statistics on this test set, refer to Figure 1.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
More details on the memory-based prediction can be found in Nivre et al. (2004) and Nivre and Scholz (2004).
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Intuitively, as suggested by the example in the introduction, this is the right granularity to capture domain effects.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The set of analyses for a token is thus represented as a lattice in which every arc corresponds to a specific lexeme l, as shown in Figure 1.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
An input ABCD can be represented as an FSA as shown in Figure 2(b).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
(For some recent corpus-based work on Chinese abbreviations, see Huang, Ahrens, and Chen [1993].)
It is probably the first analysis of Arabic parsing of this kind.
0
Next we show that the ATB is similar to other tree- banks in gross statistical terms, but that annotation consistency remains low relative to English (§3).
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
An alternate approximation to (8) would be to let w_λ(s, t) directly approximate p̂_I(s, t).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Available at http://nlp.stanford.edu/projects/arabic.shtml.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
However, until such standards are universally adopted in evaluating Chinese segmenters, claims about performance in terms of simple measures like percent correct should be taken with a grain of salt; see, again, Wu and Fung (1994) for further arguments supporting this conclusion.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
A promising direction for future work is to explicitly model a distribution over tags for each word type.
Here we present two algorithms.
0
Otherwise, label the training data with the combined spelling/contextual decision list, then induce a final decision list from the labeled examples where all rules (regardless of strength) are added to the decision list.
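A minimal sketch of inducing such a decision list from labeled examples, ranking one rule per feature by its precision on the data; the representation and the min_strength knob are illustrative assumptions, and smoothing is omitted.

```python
from collections import defaultdict

def induce_decision_list(labeled, min_strength=0.0):
    """Build one rule per feature, ranked by precision; in the final step
    described above, all rules are kept regardless of strength."""
    counts = defaultdict(lambda: defaultdict(int))
    for features, label in labeled:
        for f in features:
            counts[f][label] += 1
    rules = []
    for f, by_label in counts.items():
        label, hits = max(by_label.items(), key=lambda kv: kv[1])
        strength = hits / sum(by_label.values())
        if strength >= min_strength:
            rules.append((strength, f, label))
    return sorted(rules, reverse=True)

data = [({"contains(Mr.)"}, "person"), ({"contains(Inc.)"}, "company"),
        ({"contains(Mr.)"}, "person")]
for strength, feat, label in induce_decision_list(data):
    print(f"{feat} -> {label} ({strength:.2f})")
```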
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).
The second algorithm builds on a boosting algorithm called AdaBoost.
0
For example, a good classifier would identify Mrs. Frank as a person, Steptoe & Johnson as a company, and Honduras as a location.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and found that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
A step of an ATM consists of reading a symbol from each tape and optionally moving each head to the left or right one tape cell.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags.
All the texts were annotated by two people.
0
Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and found that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
While the generative power of CGs is greater than that of CFGs, it appears to be highly constrained.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
So, this was a surprise element due to practical reasons, not malice.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Table 1 briefly describes the seven syntactic heuristics used by BABAR to resolve noun phrases.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
In contrast, NNP (proper nouns) form a large portion of the vocabulary.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Our results outperform strong unsupervised baselines as well as approaches that rely on direct projections, and bridge the gap between purely supervised and unsupervised POS tagging models.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
For each domain, phrases which contain the same keyword are gathered to build a set of phrases (Step 3).
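Step 3 amounts to grouping phrases by their keyword; a trivial sketch with invented phrases:

```python
from collections import defaultdict

# Gather phrases that contain the same keyword into one set (Step 3).
phrases = [("buy", "A buys B"), ("buy", "A will buy B"),
           ("unit", "A, a unit of B")]

sets_by_keyword = defaultdict(list)
for keyword, phrase in phrases:
    sets_by_keyword[keyword].append(phrase)

for keyword, phrase_set in sets_by_keyword.items():
    print(keyword, "->", phrase_set)
```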
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Systems that generally do worse than others will receive a negative one.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Much of the study of grammatical systems in computational linguistics has been focused on the weak generative capacity of grammatical formalisms.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
In application settings, this may be a profitable strategy.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
The texts were annotated with the RSTtool.
0
In an experiment on automatic rhetorical parsing, the RST-annotations and PoS tags were used by (Reitter 2003) as a training corpus for statistical classification with Support Vector Machines.
This paper conducted research in the area of automatic paraphrase discovery.
0
One is the accuracy within a set of phrases which share the same keyword; the other is the accuracy of links.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning.
The corpus was annotated with different linguistic information.
0
What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We smooth Prf(p → (s, p)) for rare and OOV segments (s ∈ l, l ∈ L, s unseen) using a “per-tag” probability distribution over rare segments, which we estimate using relative frequency estimates for once-occurring segments.
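A sketch of estimating such a per-tag distribution from once-occurring (hapax) segments; this approximates the described smoothing, with the lexicon and lattice machinery omitted and the data invented for illustration.

```python
from collections import Counter, defaultdict

def per_tag_rare_distribution(corpus):
    """Relative-frequency estimates over tags of once-occurring segments,
    used as a proxy distribution for rare and OOV segments."""
    seg_counts = Counter(seg for seg, tag in corpus)
    hapax_by_tag = defaultdict(int)
    for seg, tag in corpus:
        if seg_counts[seg] == 1:
            hapax_by_tag[tag] += 1
    total = sum(hapax_by_tag.values())
    return {tag: c / total for tag, c in hapax_by_tag.items()}

corpus = [("bll", "NN"), ("xyz", "VB"), ("abc", "NN"),
          ("the", "DT"), ("the", "DT")]
print(per_tag_rare_distribution(corpus))  # rare-segment mass per tag
```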
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Eight out of the thirteen errors in the high frequency phrases in the CC-domain are the phrases in “agree”.