source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | 2.2.1 The Caseframe Representation Information extraction (IE) systems use extraction patterns to identify noun phrases that play a specific role in 1 Our implementation only resolves NPs that occur in the same document, but in retrospect, one could probably resolve instances of the same existential NP in different documents too. |
This corpus has several advantages: it is annotated at different levels. | 0 | At present, the “Potsdam Commentary Corpus” (henceforth “PCC” for short) consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | JI! |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | In Section 6 we dis cuss other issues relating to how higher-order language models could be incorporated into the model. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | This decreases the statistical significance of our results compared to those studies. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | In speech recognition the arcs of the lattice are typically weighted in order to indicate the probability of specific transitions. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website2, which are published in all the four languages of the shared task. |
There is no global pruning. | 0 | The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either à = 0 or à = 1 new target words. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | None of the parsers produce parses with crossing brackets, so none of them votes for both of the assumed constituents. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Ex: Mr. Cristiani is the president ... |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Mandarin exhibits several such processes, including A-not-A question formation, il lustrated in (3a), and adverbial reduplication, illustrated in (3b): 3. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | This overview is illustrated in Figure 1. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | In all of our experiments, the binary file (whether mapped or, in the case of most other packages, interpreted) is loaded into the disk cache in advance so that lazy mapping will never fault to disk. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The resulting algorithm is depicted in Table 1. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Finally, the concatenated 5 * 20% output is used to train the reference resolution component. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | They are set to fixed constants. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | All the links in the CC-domain are shown in Step 4 in subsection 3.2. |
Their results show that their high performance NER uses less training data than other systems. | 0 | Another attempt at using global information can be found in (Borthwick, 1999). |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | A few annotators suggested to break up long sentences into clauses and evaluate these separately. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The results of the manual and automatic evaluation of the participating system translations are detailed in the figures at the end of this paper. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | We evaluate the time and memory consumption of each data structure by computing perplexity on 4 billion tokens from the English Gigaword corpus (Parker et al., 2009). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Searching a probing hash table consists of hashing the key, indexing the corresponding bucket, and scanning buckets until a matching key is found or an empty bucket is encountered, in which case the key does not exist in the table. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Saving state allows our code to walk the data structure exactly once per query. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | (2010) consistently outperforms ours on English, we obtain substantial gains across other languages. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Examples will usually be accompanied by a translation, plus a morpheme-by-morpheme gloss given in parentheses whenever the translation does not adequately serve this purpose. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | During development, we sensed that the Recency and Syn- role KSs did not deserve to be on equal footing with the other KSs because their knowledge was so general. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | yu2 'fish.' |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | constitute names, since we have only their segmentation, not the actual classification of the segmented words. |
There is no global pruning. | 0 | When aligning the words in parallel texts (for language pairs like SpanishEnglish, French-English, ItalianGerman,...), we typically observe a strong localization effect. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Two sets of examples from Gan are given in (1) and (2) (:::::: Gan's Appendix B, exx. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | The sign test checks, how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | The prediction based on these features is a knearest neighbor classification, using the IB1 algorithm and k = 5, the modified value difference metric (MVDM) and class voting with inverse distance weighting, as implemented in the TiMBL software package (Daelemans et al., 2003). |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | At first glance, this seems only peripherally related to our work, since the specific/general distinction is made for features rather than instances. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | These systems rely on a training corpus that has been manually annotated with coreference links. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Instead of offsetting new topics with punctuation, writers of MSA insert connectives such as � wa and � fa to link new elements to both preceding clauses and the text as a whole. |
Here we present two algorithms. | 0 | The problem of "noise" items that do not fall into any of the three categories also needs to be addressed. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | We used these three parsers to explore parser combination techniques. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | For example, in the phrase “Company-A last week purchased rival Marshalls from Company-B”, the purchased company is Marshalls, not Company-B. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | In considering this aspect of a formalism, we hope to better understand the relationship between the structural descriptions generated by the grammars of a formalism, and the properties of semilinearity and polynomial recognizability. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | We use a simple TF/IDF method to measure the topicality of words. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | It was filtered to retain the top 30 translations for each source phrase using the TM part of the current log-linear model. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | All systems (except for Systran, which was not tuned to Europarl) did considerably worse on outof-domain training data. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Further, it needs extra pointers in the trie, increasing model size by 40%. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two.5 The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | For example: McCann initiated a new global system. |
Here we present two algorithms. | 0 | There has been additional recent work on inducing lexicons or other knowledge sources from large corpora. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Our System Wang, Li, and Chang a. 1\!f!IP Eflltii /1\!f!J:P $1til I b. agm: I a m: c. 5 Bf is Bf 1 d. "*:t: w _t ff 1 "* :t: w_tff 1 g., , Transliteration/Translation chen2zhongl-shenl qu3 'music by Chen Zhongshen ' huang2rong2 youlyoul de dao4 'Huang Rong said soberly' zhangl qun2 Zhang Qun xian4zhang3 you2qingl shang4ren2 hou4 'after the county president You Qing had assumed the position' lin2 quan2 'Lin Quan' wang2jian4 'Wang Jian' oulyang2-ke4 'Ouyang Ke' yinl qi2 bu4 ke2neng2 rong2xu3 tai2du2 er2 'because it cannot permit Taiwan Independence so' silfa3-yuan4zhang3 lin2yang2-gang3 'president of the Judicial Yuan, Lin Yanggang' lin2zhangl-hu2 jiangl zuo4 xian4chang3 jie3shuol 'Lin Zhanghu will give an ex planation live' jin4/iang3 nian2 nei4 sa3 xia4 de jinlqian2 hui4 ting2zhi3 'in two years the distributed money will stop' gaoltangl da4chi2 ye1zi0 fen3 'chicken stock, a tablespoon of coconut flakes' you2qingl ru4zhu3 xian4fu3 lwu4 'after You Qing headed the county government' Table 5 Performance on morphological analysis. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 6 Results and Analysis. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | For English POS tagging, BergKirkpatrick et al. (2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977).8 Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | Keywords with more than one word In the evaluation, we explained that “chairman” and “vice chairman” are considered paraphrases. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | Recently, this topic has been getting more attention, as is evident from the Paraphrase Workshops in 2003 and 2004, driven by the needs of various NLP applications. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Global features are extracted from other occurrences of the same token in the whole document. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Both parametric and non-parametric models are explored. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The results in Table 2 were achieved on the development set. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | However, in practice, unknown word models also make the distribution improper. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | In our grammar, features are realized as annotations to basic category labels. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | It then computes a normalized Levenshtein edit distance between the extracted chain and the reference. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Alon Lavie advised on this work. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Since guess and gold trees may now have different yields, the question of evaluation is complex. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | The results are given in Table 4. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 'Malaysia.' |
There is no global pruning. | 0 | A position is presented by the word at that position. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | The 4th block contains instance-weighting models trained on all features, used within a MAP TM combination, and with a linear LM mixture. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | 4. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | 2.1 Overview. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | This suggests that the backoff model is as reasonable a model as we can use in the absence of further information about the expected cost of a plural form. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | One of such approaches uses comparable documents, which are sets of documents whose content are found/known to be almost the same, such as different newspaper stories about the same event [Shinyama and Sekine 03] or different translations of the same story [Barzilay 01]. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Let s = a + b. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Lack of correct reference translations was pointed out as a short-coming of our evaluation. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Indeed there are several open issues. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Zt can be written as follows Following the derivation of Schapire and Singer, providing that W+ > W_, Equ. |
This assumption, however, is not inherent to type-based tagging models. | 0 | to represent the ith word type emitted by the HMM: P(Ti, t(i)|T(−i), W, t(−i), w, α, β) = P(Ti|W, T(−i), β) · P(t(i)|Ti, t(−i), w, α), where P(t(i)|Ti, t(−i), w, α) ∝ Π(tb,ta) P(w|Ti, t(−i), w(−i), α) · P(Ti|tb, t(−i), α) · P(ta|Ti, t(−i), α). All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(−i), where T(−i) denotes all type-level tag assignments except Ti and t(−i) denotes all token-level tags except t(i) (Johnson, 2007). |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | This is a standard adaptation problem for SMT. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The samples from each corpus were independently evaluated. |
This assumption, however, is not inherent to type-based tagging models. | 0 | β is the shared hyperparameter for the tag assignment prior and word feature multinomials. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | For example, suppose the current model assigns a belief value of .60 to {A, B}, meaning that it is 60% sure that the correct hypothesis is either A or B. Then new evidence arrives with a belief value of .70 assigned 5 Initially there are no competing hypotheses because all hypotheses are included in θ by definition. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | First, we will describe their method and compare it with our method. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Maamouri et al. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Hyperparameters Our model has two Dirichlet concentration hyperparameters: α is the shared hyperparameter for the token-level HMM emission and transition distributions. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The table makes clear that enriching our grammar improves the syntactic performance as well as morphological disambiguation (segmentation and POS tagging) accuracy. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This best instance-weighting model beats the equivalant model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | In the denomi 11 We have two such lists, one containing about 17,000 full names, and another containing frequencies of. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 11 taTweel (-) is an elongation character used in Arabic script to justify text. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Finally, model GTv = 2 includes parent annotation on top of the various state-splits, as is done also in (Tsarfaty and Sima’an, 2007; Cohen and Smith, 2007). |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | In this paper, we have presented a new, eÃcient DP-based search procedure for statistical machine translation. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | 19 We note that it is not always clear in Wang, Li, and Chang's examples which segmented words. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Word Re-ordering and DP-based Search in Statistical Machine Translation |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Zt can be written as follows Following the derivation of Schapire and Singer, providing that W+ > W_, Equ. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | “The gun” will be extracted by the caseframe “fired <patient>”. |
All the texts were annotated by two people. | 0 | What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | This best instance-weighting model beats the equivalant model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin. |
BABAR performed successfully in both the terrorism and natural disaster domains, and the contextual-role knowledge proved especially helpful for resolving pronouns. | 0 | We used the MUC4 terrorism corpus (MUC4 Proceedings, 1992) and news articles from the Reuters text collection8 that had a subject code corresponding to natural disasters. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels. |
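The table above follows a fixed row schema: a source_text summary sentence, a binary label, and a candidate target_text sentence. A minimal sketch of working with rows in this shape is given below; the `Row` class and the label-splitting step are hypothetical illustrations (not part of the dataset release), and the two example rows are copied from the table.

```python
# Hypothetical sketch of the row schema shown above:
# source_text (string), label (0 or 1), target_text (string).
from dataclasses import dataclass

@dataclass
class Row:
    source_text: str   # summary sentence, 27-368 characters in this dataset
    label: int         # binary label, 0 or 1
    target_text: str   # candidate sentence, 1-5,380 characters in this dataset

# Two rows copied verbatim from the table above.
rows = [
    Row("There is no global pruning.", 0,
        "A position is presented by the word at that position."),
    Row("Here we present two algorithms.", 0,
        "There has been additional recent work on inducing lexicons "
        "or other knowledge sources from large corpora."),
]

# Split rows by label, e.g. to inspect class balance.
negatives = [r for r in rows if r.label == 0]
positives = [r for r in rows if r.label == 1]
print(len(negatives), len(positives))  # -> 2 0
```

Both sampled rows carry label 0, consistent with the column's declared int64 range of 0-1.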