Dataset schema: source_text (string, length 27–368 characters), label (int64, values 0–1), target_text (string, length 1–5.38k characters). The rows below repeat this triple: a source_text line, its label, and the corresponding target_text.
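Since the rows are flattened into repeating (source_text, label, target_text) line triples, a small script can rebuild them and check them against the schema above. This is a minimal sketch, assuming the listing is saved as plain text with one field per line; the file name rows.txt is hypothetical:

```python
import json

# Hypothetical input: this listing saved as plain text, one field per line,
# in repeating (source_text, label, target_text) order.
PATH = "rows.txt"

with open(PATH, encoding="utf-8") as f:
    lines = [ln.strip() for ln in f if ln.strip()]

rows = []
for i in range(0, len(lines) - 2, 3):
    src, label, tgt = lines[i], int(lines[i + 1]), lines[i + 2]
    assert label in (0, 1)  # schema: label is int64 with values 0-1
    rows.append({"source_text": src, "label": label, "target_text": tgt})

print(json.dumps(rows[0], ensure_ascii=False, indent=2))
print(f"{len(rows)} rows, {sum(r['label'] for r in rows)} labeled 1")
```

The declared length ranges (27–368 for source_text, up to 5.38k for target_text) could be asserted in the same loop; every label visible in this excerpt is 0, so such a check is a quick way to confirm whether the 0–1 range is actually exercised.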
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Cluster phrases based on links. We now have a set of phrases which share a keyword.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Tsarfaty (2006) argues that for Semitic languages determining the correct morphological segmentation is dependent on syntactic context and shows that increasing information sharing between the morphological and the syntactic components leads to improved performance on the joint task.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009).
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Scoping filters a candidate if it is outside the anaphor's scope.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
In all figures, we present the per-sentence normalized judgements.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Approaches differ in the algorithms used for scoring and selecting the best path, as well as in the amount of contextual information used in the scoring process.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Thus, the language generated by a grammar of an LCFRS is semilinear.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Input: {(x1,i, x2,i)}. Initialize: ∀i, j : gj(xi) = 0.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
This is true of the widely used link grammar parser for English (Sleator and Temperley, 1993), which uses a dependency grammar of sorts, the probabilistic dependency parser of Eisner (1996), and more recently proposed deterministic dependency parsers (Yamada and Matsumoto, 2003; Nivre et al., 2004).
They have made use of local and global features to deal with instances of the same token in a document.
0
If any of the tokens from … to … is in Person-Prefix-List, then another feature Person-Prefix is set to 1.
Here we present two algorithms.
0
A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. However, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple "seed" rules.
Replacing this with a ranked evaluation seems to be more suitable.
0
This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
yu2 'fish.'
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
5.1 Parsing Models.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
We use … iterations of sampling (see Figure 2 for a depiction).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The evaluation framework for the shared task is similar to the one used in last year’s shared task.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
[Garbled comparison table: example segmentations from Our System vs. Wang, Li, and Chang; the hanzi columns did not survive extraction. Recoverable transliterations and glosses: chen2zhongl-shenl qu3 'music by Chen Zhongshen'; huang2rong2 youlyoul de dao4 'Huang Rong said soberly'; zhangl qun2 'Zhang Qun'; xian4zhang3 you2qingl shang4ren2 hou4 'after the county president You Qing had assumed the position'; lin2 quan2 'Lin Quan'; wang2jian4 'Wang Jian'; oulyang2-ke4 'Ouyang Ke'; yinl qi2 bu4 ke2neng2 rong2xu3 tai2du2 er2 'because it cannot permit Taiwan Independence so'; silfa3-yuan4zhang3 lin2yang2-gang3 'president of the Judicial Yuan, Lin Yanggang'; lin2zhangl-hu2 jiangl zuo4 xian4chang3 jie3shuol 'Lin Zhanghu will give an explanation live'; jin4liang3 nian2 nei4 sa3 xia4 de jinlqian2 hui4 ting2zhi3 'in two years the distributed money will stop'; gaoltangl da4chi2 ye1zi0 fen3 'chicken stock, a tablespoon of coconut flakes'; you2qingl ru4zhu3 xian4fu3 hou4 'after You Qing headed the county government'.] Table 5: Performance on morphological analysis.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
It is difficult when IN and OUT are dissimilar, as they are in the cases we study.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRS's), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Unlike GCFG's, however, the composition operations of LCFRS's are restricted to be linear (do not duplicate unboundedly large structures) and nonerasing (do not erase unbounded structures, a restriction made in most modern transformational grammars).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The resulting structural differences between treebanks can account for relative differences in parsing performance.
This paper discusses Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
For example, one parser could be more accurate at predicting noun phrases than the other parsers.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
63 95.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Supervised methods have been applied quite successfully to the full MUC named-entity task (Bikel et al. 97).
A beam search concept is applied as in speech recognition.
0
This number must be less than or equal to n − 1.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
The observed performance gains, coupled with the simplicity of model implementation, make it a compelling alternative to existing, more complex counterparts.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
In general, the neighborhoods can be more diverse and we allow a soft label distribution over the vertices.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
3.
Here both parametric and non-parametric models are explored.
0
The set of candidate constituents comes from the union of all the constituents suggested by the member parsers.
In this paper, the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
In Section 6 we discuss other issues relating to how higher-order language models could be incorporated into the model.
This assumption, however, is not inherent to type-based tagging models.
0
1 74.5 56.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels.
There is no global pruning.
0
e′ and e are the last two target words, C is a coverage set for the already covered source positions, and j is the last position visited.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The problem of "noise" items that do not fall into any of the three categories also needs to be addressed.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
The cost of storing these averages, in bits, is … . Because there are comparatively few unigrams, we elected to store them byte-aligned and unquantized, making every query faster.
The texts were annotated with the RSTtool.
0
In (Reitter, Stede 2003) we went a different way and suggested URML, an XML format for underspecifying rhetorical structure: a number of relations can be assigned instead of a single one, and competing analyses can be represented with shared forests.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
It is natural that the larger the data in the domain, the more keywords are found.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
[… keywords. Step 3: sets of phrases based on keywords. Step 4: links between sets of phrases.] All the contexts collected for a given domain are gathered in a bag and the TF/ITF scores are calculated for all the words except stopwords in the bag.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
…morphological rules, and personal names; the transitive closure of the resulting machine is then computed.
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The manual scores are averages over the raw unnormalized scores.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Similar behavior is observed when adding features.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
The focus of this work is on building POS taggers for foreign languages, assuming that we have an English POS tagger and some parallel text between the two languages.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The probability of the classes assigned to the words in a sentence s in a document D is defined as follows: P(c1, …, cn | s, D) = ∏i p(ci | s, D), where p(ci | s, D) is determined by the maximum entropy classifier.
There are clustering approaches that assign a single POS tag to each word type.
0
In contrast to results reported in Johnson (2007), we found that the performance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full iterations. [Sampling distribution: P(Ti | T−i, β) · ∏(f,v)∈Wi P(v | Ti, f, W−i, T−i, β).] All of the probabilities on the right-hand side are Dirichlet distributions, which can be computed analytically given counts.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For the seen word [hanzi] 'generals,' there is an ε:NC transduction from [hanzi] to the node preceding [hanzi]; this arc has cost cost([hanzi]) − cost(unseen([hanzi])), so that the cost of the whole path is the desired cost([hanzi]).
This paper conducted research in the area of automatic paraphrase discovery.
0
Evaluation results for links
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, and found that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
In the next section, we show how an ATM can accept the strings generated by a grammar in an LCFRS formalism in logspace, and hence show that each family can be recognized in polynomial time.
Two general approaches are presented and two combination techniques are described for each approach.
0
The development set contained 44088 constituents in 2416 sentences and the test set contained 30691 constituents in 1699 sentences.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
3.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
As illustrated by MCTAG's, it is possible for a formalism to give tree sets with bounded dependent paths while still sharing the constrained rewriting properties of CFG's, HG's, and TAG's.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
A less canonical representation of segmental morphology is triggered by a morpho-phonological process of omitting the definite article h when occurring after the particles b or l. This process triggers ambiguity as to the definiteness status of nouns following these particles. We refer to such cases, in which the concatenation of elements does not strictly correspond to the original surface form, as super-segmental morphology.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Even when there is training data available in the domain of interest, there is often additional data from other domains that could in principle be used to improve performance.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Table 1: Syntactic Seeding Heuristics. BABAR's reliable case resolution heuristics produced a substantial set of anaphor/antecedent resolutions that will be the training data used to learn contextual role knowledge.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats.
All the texts were annotated by two people.
0
Due to the dramatic fiscal situation in Brandenburg she now surprisingly withdrew legislation drafted more than a year ago, and suggested to decide on it not before 2003.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
As can be seen in the example, the first two phrases have a different order of NE names from the last two, so we can determine that the last two phrases represent a reversed relation.
The AdaBoost algorithm was developed for supervised learning.
0
Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Denote by gj(x) = Σt αt ht(x), j ∈ {1, 2}, the unthresholded strong hypothesis (i.e., fj(x) = sign(gj(x))).
This paper discusses the Potsdam Commentary Corpus, a corpus of German commentaries assembled by Potsdam University.
0
(Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung (‘communicative-weight assignment’).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
As such, global information from the whole context of a document is important to more accurately recognize named entities.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
One hopes that such a corpus will be forthcoming.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
For example, one parser could be more accurate at predicting noun phrases than the other parsers.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999).
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
For example, a story can mention “the FBI”, “the White House”, or “the weather” without any prior referent in the story.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in E7 wang4bu4-liao3 (forget-not-attain) 'cannot forget.'
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The errors shown are from the Berkeley parser output, but they are representative of the other two parsing models.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J | e_1^I) is the translation model.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.
The AdaBoost algorithm was developed for supervised learning.
0
We now describe the CoBoost algorithm for the named entity problem.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
2.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The approach has been successfully tested on the 8 000-word Verbmobil task.
A beam search concept is applied as in speech recognition.
0
The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 10 source positions.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
name => 1 hanzi family 2 hanzi given 3.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
We evaluated the results based on two metrics.
The features were weighted within a logistic model that gave an overall weight, which was applied to the phrase pair before making MAP-smoothed relative-frequency estimates; these were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Standard SMT systems have a hierarchical parameter structure: top-level log-linear weights are used to combine a small set of complex features, interpreted as log probabilities, many of which have their own internal parameters and objectives.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
A very small excerpt from an Italian-English graph is shown in Figure 1.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
The links can solve the problem.
This paper conducted research in the area of automatic paraphrase discovery.
0
Smith estimates Lotus will make a profit this quarter…”, our system extracts “Smith estimates Lotus” as an instance.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
SRILM’s compact variant, IRSTLM, MITLM, and BerkeleyLM’s sorted variant are all based on this technique.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 773–76) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Figure 2 shows timing results.
They found replacing it with a ranked evaluation to be more suitable.
0
While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or, as the acronym BLEU puts it, a bilingual evaluation understudy.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
We incur some additional memory cost due to storing state in each hypothesis, though this is minimal compared with the size of the model itself.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
For each terminal, the Leaf Ancestor metric extracts the shortest path to the root.
Here both parametric and non-parametric models are explored.
0
We see from these results that the behavior of the parametric techniques is robust in the presence of a poor parser.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
(Again, the goal of … also in structural features.)
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Coverage indicates the fraction of hypotheses in which the character yield exactly matched the reference.