Dataset columns: source_text (string, lengths 27–368), label (int64, values 0–1), target_text (string, lengths 1–5.38k)
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
They focused on phrases with two Named Entities, and proceeded in two stages.
0
At this step, we will try to link those sets, and put them into a single cluster.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
4.1 Dictionary Representation.
Combining multiple highly-accurate independent parsers yields promising results.
0
Call the crossing constituents A and B.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Assigning rhetorical relations thus poses questions that can often be answered only subjectively.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Our second point of comparison is with Graça et al.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
In all figures, we present the per-sentence normalized judgements.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We create equivalence classes for verb, noun, and adjective POS categories.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We evaluate our model on seven languages exhibiting substantial syntactic variation.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Language models that contain w_1^k must also contain the prefixes w_1^i for 1 ≤ i ≤ k. Therefore, when the model is queried for p(w_n | w_1^{n-1}) but the longest matching suffix is w_f^n, it may return state s(w_1^n) = w_f^n, since no longer context will be found.
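The longest-matching-suffix behaviour described above can be illustrated with a toy lookup. This is a hedged sketch, not KenLM's actual implementation: the dict-of-tuples model and the function name are assumptions made for illustration.

```python
def longest_suffix_prob(ngrams, context, word):
    # ngrams: dict mapping tuples of words to log-probabilities.
    # Try the full context first, then progressively shorter suffixes;
    # the matched suffix also serves as the state carried forward.
    for start in range(len(context) + 1):
        key = tuple(context[start:]) + (word,)
        if key in ngrams:
            return ngrams[key], key
    # Unknown word: no entry matched at any suffix length.
    return float('-inf'), (word,)
```

Querying with a context the model has never seen simply falls back to the longest suffix that is present, mirroring the state-return behaviour the text describes.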
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
The dependency graph in Figure 1 satisfies all the defining conditions above, but it fails to satisfy the condition of projectivity (Kahane et al., 1998): the arc connecting the head jedna (one) to the dependent Z (out-of) spans the token je (is), which is not dominated by jedna.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Such ambiguities cause discrepancies between token boundaries (indexed as white spaces) and constituent boundaries (imposed by syntactic categories) with respect to a surface form.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
A spelling rule might be a simple look-up for the string (e.g., a rule that Honduras is a location) or a rule that looks at words within a string (e.g., a rule that any string containing Mr. is a person).
Here both parametric and non-parametric models are explored.
0
We call such a constituent an isolated constituent.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
So far the graph has been completely unlabeled.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Secondly, for all our models we provide better fine- and coarse-grained POS-tagging accuracy, and all pruned models outperform the Oracle results reported by them. In terms of syntactic disambiguation, even the simplest grammar pruned with HSPELL outperforms their non-Oracle results.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
The function A : F → C maps from the language-specific fine-grained tagset F to the coarser universal tagset C and is described in detail in §6.2. Note that when tx(y) = 1 the feature value is 0 and has no effect on the model, while its value is −∞ when tx(y) = 0 and constrains the HMM’s state space.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The BerkeleyLM direct-mapped cache is in principle faster than caches implemented by RandLM and by IRSTLM, so we may write a C++ equivalent implementation as future work.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
During coreference resolution, BABAR checks (1) whether the anaphor is among the lexical expectations for the caseframe that extracts the candidate antecedent, and (2) whether the candidate is among the lexical expectations for the caseframe that extracts the anaphor.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Most of these groups follow a phrase-based statistical approach to machine translation.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
This variant is tested in the experiments below.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Informally, two or more paths can be dependent on each other: for example, they could be required to be of equal length as in the trees in Figure 4. generates such a tree set.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Each word is terminated by an arc that represents the transduction between f and the part of speech of that word, weighted with an estimated cost for that word.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This is a somewhat less direct objective than the one used by Matsoukas et al., who make an iterative approximation to expected TER.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution.
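A minimal sketch of interpolation search as characterized above: binary search whose probe position is estimated from the (assumed uniform) key distribution. The function name and integer-key setup are illustrative assumptions, not code from the paper.

```python
def interpolation_search(keys, target):
    """Search a sorted list of roughly uniformly distributed keys.

    Instead of probing the midpoint as binary search does, estimate
    the target's position from its value relative to the endpoints.
    """
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            pos = lo
        else:
            # Linear interpolation: estimate where target falls in [lo, hi].
            pos = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pos] == target:
            return pos
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1  # not found
```

On uniformly distributed keys the estimate lands near the target immediately, which is why it can beat plain binary search in expectation.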
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
This distributional sparsity of syntactic tags is not unique to English. (The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/.)
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects.
A beam search concept is applied as in speech recognition.
0
However there is no global pruning.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Process statistics are already collected by the kernel (and printing them has no meaningful impact on performance).
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
In both cases the investigators were able to achieve significant improvements over the previous best tagging results.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
4.1 Corpora.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
In contrast, even though context can be helpful for resolving definite NPs, context can be trumped by the semantics of the nouns themselves.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We insisted that each structure dominates a bounded number of (not necessarily adjacent) substrings.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Our test machine has two Intel Xeon E5410 processors totaling eight cores, 32 GB RAM, and four Seagate Barracuda disks in software RAID 0 running Linux 2.6.18.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
Again, this deserves further investigation.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
Also, in Information Extraction (IE), in which the system tries to extract elements of some events (e.g. date and company names of a corporate merger event), several event instances from different news articles have to be aligned even if these are expressed differently.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We can check what the consequences of less manual annotation would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% fewer.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In practice, we can therefore expect a trade-off such that increasing the amount of information encoded in arc labels will cause an increase in the accuracy of the inverse transformation but a decrease in the accuracy with which the parser can construct the labeled representations.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
These knowledge sources determine whether the contexts surrounding an anaphor and antecedent are compatible.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
97 81.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
This paper discusses the use of unlabeled examples for the problem of named entity classification.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Thus in an English sentence such as I'm going to show up at the ACL one would reasonably conjecture that there are eight words separated by seven spaces.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Not all the layers have been produced for all the texts yet.
Here we present two algorithms.
0
Inspection of the data shows that at n = 2500, the two classifiers both give labels on 44,281 (49.2%) of the unlabeled examples, and give the same label on 99.25% of these cases.
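The coverage and same-label statistics quoted above can be computed with a small helper. This is an illustrative sketch: the function name and the convention of modelling abstentions as None are assumptions, not the paper's code.

```python
def classifier_agreement(labels_a, labels_b):
    """Coverage: fraction of examples where both classifiers commit to a label.

    Agreement: fraction of that committed subset on which they give the
    same label. Abstentions are represented as None.
    """
    both = [(a, b) for a, b in zip(labels_a, labels_b)
            if a is not None and b is not None]
    coverage = len(both) / len(labels_a)
    agreement = (sum(a == b for a, b in both) / len(both)) if both else 0.0
    return coverage, agreement
```

Running this over the two classifiers' outputs on the unlabeled pool yields exactly the two numbers reported in the text (49.2% coverage, 99.25% agreement) for that dataset.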
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Unfortunately, there is no standard corpus of Chinese texts, tagged with either single or multiple human judgments, with which one can compare performance of various methods.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
We follow the guidelines developed in the TIGER project (Brants et al. 2002) for syntactic annotation of German newspaper text, using the Annotate3 tool for interactive construction of tree structures.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0.
The corpus was annotated with different linguistic information.
0
Besides the applications just sketched, the overarching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In a more recent study than Chang et al., Wang, Li, and Chang (1992) propose a surname-driven, non-stochastic, rule-based system for identifying personal names. Wang, Li, and Chang also compare their performance with Chang et al.'s system.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
(a) 是 shi4 'be' => 是不是 shi4bu2-shi4 (be-not-be) 'is it?'
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Evaluation metrics used are Attachment Score (AS), i.e. the proportion of tokens that are attached to the correct head, and Exact Match (EM), i.e. the proportion of sentences for which the dependency graph exactly matches the gold standard.
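The two metrics follow directly from their definitions and can be sketched as below; the function names and the head-index representation (one head index per token) are assumptions for illustration, not the paper's code.

```python
def attachment_score(pred_heads, gold_heads):
    # AS: proportion of tokens attached to the correct head.
    correct = sum(p == g for p, g in zip(pred_heads, gold_heads))
    return correct / len(gold_heads)

def exact_match(pred_sents, gold_sents):
    # EM: proportion of sentences whose dependency graph matches
    # the gold standard exactly (all heads correct).
    hits = sum(p == g for p, g in zip(pred_sents, gold_sents))
    return hits / len(gold_sents)
```

AS is computed over all tokens in the corpus, while EM gives credit only for fully correct sentences, so EM is always the stricter of the two.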
This paper talks about Unsupervised Models for Named Entity Classification.
0
The following features were used: full-string=x, the full string (e.g., for Maury Cooper, full-string=Maury_Cooper); contains(x), which, if the spelling contains more than one word, applies for each word the string contains (e.g., Maury Cooper contributes two such features, contains(Maury) and contains(Cooper)); allcap1, which appears if the spelling is a single word that is all capitals (e.g., IBM would contribute this feature); allcap2, which appears if the spelling is a single word consisting of capitals or full periods and containing at least one period.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from the Märkische Allgemeine Zeitung, a German regional daily.
0
Section 4 draws some conclusions from the present state of the effort.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Thus, as in CFG's, at any point in the derivation, the set of structures that can be applied is determined only by a finite set of rules encapsulated by the grammar.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Cohen and Smith approach this by introducing the α hyperparameter, which performs best when optimized independently for each sentence (cf.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Note that Chang, Chen, and Chen (1991), in addition to word-frequency information, include a constraint-satisfication model, so their method is really a hybrid approach.
There are clustering approaches that assign a single POS tag to each word type.
0
Learned Tag Prior (PRIOR) We next assume there exists a single prior distribution ψ over tag assignments drawn from DIRICHLET(β, K ).
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The translation scores for the hypotheses generated with different threshold values t0 are compared to the translation scores obtained with a conservatively large threshold t0 = 10.0. For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t0 = 10.0, and this number is reported as the number of search errors.
Here we present two algorithms.
0
For the moment we will assume that there are only two possible labels: each y_i is in {−1, +1}.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
For queries, we uniformly sampled 10 million hits and 10 million misses.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
level.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This smoothing guarantees that no zero probabilities are estimated.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
That is, given a choice between segmenting a sequence abc into abc and ab, c, the former will always be picked so long as its cost does not exceed the summed costs of ab and c: while it is possible for abc to be so costly as to preclude the larger grouping, this will certainly not usually be the case.
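The cost comparison just described can be sketched with negative log probabilities, where the cost of a segmentation is the sum of its word costs. The toy lexicon and function names below are assumptions for illustration only.

```python
import math

def seg_cost(word_probs, words):
    # Cost of a segmentation = summed negative log probabilities
    # of its words (lower cost = more probable).
    return sum(-math.log(word_probs[w]) for w in words)

def cheaper(word_probs, whole, parts):
    # The single-word grouping wins unless its cost exceeds
    # the summed costs of its parts.
    if seg_cost(word_probs, [whole]) <= seg_cost(word_probs, parts):
        return [whole]
    return parts
```

Because a single word carries one cost while a split carries the sum of several, the larger grouping is usually preferred, exactly as the text argues.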
Replacing this with a ranked evaluation seems to be more suitable.
0
Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
We have not to date explored these various options.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Memory usage is likely much lower than ours. The original paper (Germann et al., 2009) provided only 2 s of query timing and compared with SRI when it exceeded available RAM.
There is no global pruning.
0
The resulting algorithm is depicted in Table 1.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Recently, this topic has been getting more attention, as is evident from the Paraphrase Workshops in 2003 and 2004, driven by the needs of various NLP applications.
The texts were annotated with the RSTtool.
0
As an indication, in our core corpus, we found an average sentence length of 15.8 words and 1.8 verbs per sentence, whereas a randomly taken sample of ten commentaries from the national papers Süddeutsche Zeitung and Frankfurter Allgemeine has 19.6 words and 2.1 verbs per sentence.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
MENE has only been tested on MUC7.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
7 Conclusion and Future Work.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
For parsing, the most challenging form of ambiguity occurs at the discourse level.
The corpus was annotated with different linguistic information.
0
Unexpectedly, because the ministries of treasury and education both had prepared the teacher plan together.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
The question is how to normalize the probabilities in such a way that smaller groupings have a better shot at winning.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
This can be repeated several times to collect a list of author / book title pairs and expressions.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Under this scheme, n human judges are asked independently to segment a text.
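One illustrative way to quantify agreement among the n judges' segmentations is to compare their boundary sets pairwise. The Jaccard overlap used here is an assumption of this sketch, not necessarily the measure used in the paper.

```python
from itertools import combinations

def boundary_agreement(segmentations):
    """Average pairwise agreement between judges' segmentations.

    Each segmentation is a set of boundary positions (e.g., character
    offsets). Pairwise agreement is the Jaccard overlap of the two
    judges' boundary sets.
    """
    pairs = list(combinations(segmentations, 2))
    if not pairs:
        return 1.0  # a single judge trivially agrees with itself
    total = 0.0
    for a, b in pairs:
        union = a | b
        total += len(a & b) / len(union) if union else 1.0
    return total / len(pairs)
```

Comparing each judge's boundary set against the system's output in the same way gives the system-vs-human agreement figure the text reports alongside the interhuman one.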
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts, emphasis shifted towards depth.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Consequently, we cut their evidence values in half to lessen their influence.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Our system fails in (a) because of shen1, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
We have developed a general approach for combining parsers when preserving the entire structure of a parse tree is important.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Table 5: Evaluation of 100 randomly sampled variation nuclei types.
It is probably the first analysis of Arabic parsing of this kind.
0
We map the ATB morphological analyses to the shortened “Bies” tags for all experiments.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
…interested in correlations between prosody and discourse structure.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We compare the manually annotated grammar, which we incorporate into the Stanford parser, to both the Berkeley (Petrov et al., 2006) and Bikel (Bikel, 2004) parsers.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). Given how manual judgements were collected, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems).
This paper talks about Pseudo-Projective Dependency Parsing.
0
It is likely that the more complex cases, where path information could make a difference, are beyond the reach of the parser in most cases.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Compared to the EMEA/EP setting, the two domains in the NIST setting are less homogeneous and more similar to each other; there is also considerably more IN text available.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
  Word                   Head Of   Complement   POS
1 inna "Indeed, truly"   VP        Noun         VBP
2 anna "That"            SBAR      Noun         IN
3 in "If"                SBAR      Verb         IN
4 an "to"                SBAR      Verb         IN
Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form an.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Figure 3 shows a small fragment of the WFST encoding the dictionary, containing both entries just discussed: zhong1hua2 min2guo2 (China Republic) 'Republic of China.'
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Relative pronouns with only 1 NP in scope.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
But to its great advantage, it has a high ratio of non-terminals to terminals (μ Constituents / μ Length).
This topic has been getting more attention, driven by the needs of various NLP applications.
0
In total, for the 2,000 NE category pairs, 5,184 keywords are found.