source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars)
---|---|---|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
Our initial experimentation with the evaluation tool showed that this is often too overwhelming.
|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
We used C = 1.0 as the L2 regularization constant in (Eq.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
3.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
Word type, N, %: Dictionary entries 2,543, 97.47; Morphologically derived words 3, 0.11; Foreign transliterations 9, 0.34; Personal names 54, 2.07 cases.
|
There is no global pruning.
| 0 |
The 13 positions of the source sentence are processed in the order shown.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
Specifically, we assume each word type W consists of feature-value pairs (f, v).
|
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
| 0 |
88,962 (spelling,context) pairs were extracted as training data.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Assigning rhetorical relations thus poses questions that can often be answered only subjectively.
|
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
| 0 |
2.2.4 Semantic Caseframe Expectations The third type of contextual role knowledge learned by BABAR is Semantic Caseframe Expectations.
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
Future work along these lines will incorporate other layers of annotation, in particular the syntax information.
|
All the texts were annotated by two people.
| 0 |
Links can be of two different kinds: anaphoric or bridging (definite noun phrases picking up an antecedent via world-knowledge).
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
Initially, we planned to compare the semantic classes of an anaphor and a candidate and infer that they might be coreferent if their semantic classes intersected.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
The translation direction is from German to English.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
set was based on an earlier version of the Chang et al.
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Our test machine has two Intel Xeon E5410 processors totaling eight cores, 32 GB RAM, and four Seagate Barracuda disks in software RAID 0 running Linux 2.6.18.
|
The use of global features has shown excellent results in performance on MUC-6 and MUC-7 test data.
| 0 |
Reference resolution involves finding words that co-refer to the same entity.
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
It is also worth pointing out a connection with Daum´e’s (2007) work that splits each feature into domain-specific and general copies.
|
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
| 0 |
Our experimental setup therefore is designed to serve two goals.
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rulebased commercial system, will give further insight into the relation between automatic and manual evaluation.
|
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
| 0 |
So, who won the competition?
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.
|
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
| 0 |
Our results suggest that it is possible to learn accurate POS taggers for languages which do not have any annotated data, but have translations into a resource-rich language.
|
This paper presents research in the area of automatic paraphrase discovery.
| 0 |
Other errors include NE tagging errors and errors due to a phrase which includes other NEs.
|
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
| 0 |
For each experiment we gave a nonparametric and a parametric technique for combining parsers.
|
There is no global pruning.
| 0 |
An error count of 0.0 is assigned to a perfect translation, and an error count of 1.0 is assigned to a semantically and syntactically wrong translation.
|
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
| 0 |
It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function.
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.
|
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
| 0 |
32 81.
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN).
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
(2009).
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
Number filters candidate if number doesn't agree.
|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
| 0 |
It is likely that the more complex cases, where path information could make a difference, are beyond the reach of the parser in most cases.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
Due to the dramatic fiscal situation in Brandenburg she now surprisingly withdrew legislation drafted more than a year ago, and suggested to decide on it not before 2003.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3).
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
While sorted arrays could be used to implement the same data structure as PROBING, effectively making m = 1, we abandoned this implementation because it is slower and larger than a trie implementation.
|
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
| 0 |
This design does not guarantee "structural zeros," but biases towards sparsity.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
The availability of these resources guided our selection of foreign languages.
|
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
| 0 |
We used it to score all phrase pairs in the OUT table, in order to provide a feature for the instance-weighting model.
|
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
| 0 |
We formulate the update as follows: where ∀ui ∈ Vf \ Vfl, γi(y) and κi are defined as: We ran this procedure for 10 iterations.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
In the pinyin transliterations a dash(-) separates syllables that may be considered part of the same phonological word; spaces are used to separate plausible phonological words; and a plus sign (+) is used, where relevant, to indicate morpheme boundaries of interest.
|
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.
|
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
| 0 |
The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
All three curves remain steep at the maximum training set size of 18818 trees.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Morphological Analyzer Ideally, we would use an off-the-shelf morphological analyzer for mapping each input token to its possible analyses.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
This process produces a large set of caseframes coupled with a list of the noun phrases that they extracted.
|
Two general approaches are presented and two combination techniques are described for each approach.
| 0 |
Combining multiple highly-accurate independent parsers yields promising results.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Judges: AG, GR, ST, M1, M2, M3, T1, T2, T3. Pairwise agreement: AG: 0.70, 0.70, 0.43, 0.42, 0.60, 0.60, 0.62, 0.59; GR: 0.99, 0.62, 0.64, 0.79, 0.82, 0.81, 0.72; ST: 0.64, 0.67, 0.80, 0.84, 0.82, 0.74; M1: 0.77, 0.69, 0.71, 0.69, 0.70; M2: 0.72, 0.73, 0.71, 0.70; M3: 0.89, 0.87, 0.80; T1: 0.88, 0.82; T2: 0.78; respectively, the recall and precision.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
The same argument holds for resolving PP attachment of a prefixed preposition or marking conjunction of elements of any kind.
|
The texts were annotated with the RSTtool.
| 0 |
Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
In this section we present a partial evaluation of the current system, in three parts.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
Most of these groups follow a phrase-based statistical approach to machine translation.
|
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
| 0 |
The number of top-ranked pairs to retain is chosen to optimize dev-set BLEU score.
|
The texts were annotated with the RSTtool.
| 0 |
2.1 Part-of-speech tags.
|
These clusters are computed using an SVD variant without relying on transitional structure.
| 0 |
to represent the ith word type emitted by the HMM: P(t(i)|Ti, t(−i), w, α) ∝ Π(tb,ta) P(w|Ti, t(−i), w(−i), α), and P(Ti, t(i)|T(−i), W, t(−i), w, α, β) = P(Ti|W, T(−i), β) P(t(i)|Ti, t(−i), w, α). All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(−i), where T(−i) denotes all type-level tag assignments except Ti and t(−i) denotes all token-level tags except t(i) and w(−i) (Johnson, 2007).
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
The Wang, Li, and Chang system fails on fragment (b) because their system lacks the word youlyoul 'soberly' and misinterpreted the thus isolated first youl as being the final hanzi of the preceding name; similarly our system failed in fragment (h) since it is missing the abbreviation i:lJI!
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
ATB/CTB6/Negra/WSJ: Trees 23449/28278/20602/43948; Word Types 40972/45245/51272/46348; Tokens 738654/782541/355096/1046829; Tags 32/34/499/45; Phrasal Cats 22/26/325/27; Test OOV 16.8%/22.2%/30.5%/13.2%. Table 4: Gross statistics for several different treebanks.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
| 0 |
English was again paired with German, French, and Spanish.
|
The texts were annotated with the RSTtool.
| 0 |
Two aspects of the corpus have been presented in previous papers ((Reitter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
This Good-Turing estimate of p(unseen(fn) | fn) can then be used in the normal way to define the probability of finding a novel instance of a construction in fn in a text: p(unseen(fn)) = p(unseen(fn) | fn) p(fn). Here p(fn) is just the probability of any construction in fn as estimated from the frequency of such constructions in the corpus.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
Morphological analyzers for Hebrew that analyze a surface form in isolation have been proposed by Segal (2000), Yona and Wintner (2005), and recently by the knowledge center for processing Hebrew (Itai et al., 2006).
|
The use of global features has shown excellent results in performance on MUC-6 and MUC-7 test data.
| 0 |
MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC-7 participants.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
As already pointed out in Section 2.4, current theories diverge not only on the number and definition of relations but also on aspects of structure, i.e., whether a tree is sufficient as a representational device or general graphs are required (and if so, whether any restrictions can be placed on these graphs' structures; cf.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
A similar structure is used in speech recognition.
|
They have made use of local and global features to deal with the instances of same token in a document.
| 0 |
(1998) did make use of information from the whole document.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
(2010) consistently outperforms ours on English, we obtain substantial gains across other languages.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
Particles are uninflected.
|
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
| 0 |
If a phrase does not contain any keywords, the phrase is discarded.
|
They focused on phrases which connect two Named Entities, and proceed in two stages.
| 0 |
First, from a large corpus, we extract all the NE instance pairs.
|
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
| 0 |
English was again paired with German, French, and Spanish.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
For example, even if the contexts surrounding an anaphor and candidate match exactly, they are not coreferent if they have substantially different meanings 9 We would be happy to make our manually annotated test data available to others who also want to evaluate their coreference resolver on the MUC4 or Reuters collections.
|
The corpus was annotated with different linguistic information.
| 0 |
Currently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step.
|
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
| 0 |
Two aspects of the corpus have been presented in previous papers ((Reitter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).
|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
| 0 |
It also uses less memory, with 8 bytes of overhead per entry (we store 16-byte entries with m = 1.5); linked list implementations hash set and unordered require at least 8 bytes per entry for pointers.
|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
| 0 |
The CoBoost algorithm just described is for the case where there are two labels: for the named entity task there are three labels, and in general it will be useful to generalize the CoBoost algorithm to the multiclass case.
|
BABAR's performance in both the terrorism and natural disaster domains, and its contextual-role knowledge for pronouns, have shown successful results.
| 0 |
The scoping heuristics are based on the anaphor type: for reflexive pronouns the scope is the current clause, for relative pronouns it is the prior clause following its VP, for personal pronouns it is the anaphor's sentence and two preceding sentences, and for definite NPs it is the anaphor's sentence and eight preceding sentences.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
| 0 |
This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
|
The corpus was annotated with different linguistic information.
| 0 |
Preferences for constituent order (especially in languages with relatively free word order) often belong to this group.
|
This topic has been getting more attention, driven by the needs of various NLP applications.
| 0 |
Next, for each pair of NE categories, we collect all the contexts and find the keywords which are topical for that NE category pair.
|
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
| 0 |
Thus we opted not to take the step of creating more precise written annotation guidelines (as (Carlson, Marcu 2001) did for English), which would then allow for measuring inter-annotator agreement.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
This feature has a linguistic justification.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
The normalized judgement per sentence is the raw judgement plus (0 minus average raw judgement for this judge on this sentence).
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
This approach is compared to another reordering scheme presented in (Berger et al., 1996).
|
This paper talks about KenLM: Faster and Smaller Language Model Queries.
| 0 |
Unlike Germann et al. (2009), we chose a model size so that all benchmarks fit comfortably in main memory.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
An anti-greedy algorithm, AG: instead of the longest match, take the.
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
In fact, it is very difficult to maintain consistent standards, on what (say) an adequacy judgement of 3 means even for a specific language pair.
|
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
| 0 |
We aligned the texts at a sentence level across all four languages, resulting in 1064 sentences per language.
|
This paper presents methods to query N-gram language models, minimizing time and space costs.
| 0 |
Figure 2 shows timing results.
|
These clusters are computed using an SVD variant without relying on transitional structure.
| 0 |
In contrast, NNP (proper nouns) form a large portion of vocabulary.
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
We obtained positive results using a very simple phrase-based system in two different adaptation settings: using English/French Europarl to improve performance on a small, specialized medical domain; and using non-news portions of the NIST09 training material to improve performance on the news-related corpora.
|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
| 0 |
Call the crossing constituents A and B.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
At most one feature in this group will be set to 1.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
A summary of the corpus used in the experiments is given in Table 3.
|
The use of global features has shown excellent results in performance on MUC-6 and MUC-7 test data.
| 0 |
MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-Kirkpatrick et al., 2010).
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Hash tables are a common sparse mapping technique used by SRILM’s default and BerkeleyLM’s hashed variant.
|