source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Here we present two algorithms. | 0 | In a fully supervised setting, the task is to learn a function f such that for all i = 1...m, f(x1,i, x2,i) = yi. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | 2. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | TIS systems in general need to do more than simply compute the. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Each visited entry wni stores backoff b(wni ). |
Here we present two algorithms. | 0 | The following features were used: full-string=x The full string (e.g., for Maury Cooper, full-string=Maury_Cooper). contains(x) If the spelling contains more than one word, this feature applies for any words that the string contains (e.g., Maury Cooper contributes two such features, contains(Maury) and contains(Cooper)). allcap1 This feature appears if the spelling is a single word which is all capitals (e.g., IBM would contribute this feature). allcap2 This feature appears if the spelling is a single word which is all capitals or full periods, and contains at least one period. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Under this scheme, n human judges are asked independently to segment a text. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | It is closer to the smaller value of precision and recall when there is a large skew in their values. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The type of alignment we have considered so far requires the same length for source and target sentence, i.e. I = J. Evidently, this is an unrealistic assumption, therefore we extend the concept of inverted alignments as follows: When adding a new position to the coverage set C, we might generate either à = 0 or à = 1 new target words. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | BABAR achieved recall in the 4250% range for both domains, with 76% precision overall for terrorism and 87% precision for natural disasters. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Section 4.1 explained that state s is stored by applications with partial hypotheses to determine when they can be recombined. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | To evaluate proper-name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a single unambiguous answer, unlike the more general question of which segmentation is correct. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | Step 4. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | We used the TF/ITF metric to identify keywords. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | This has solutions: where pI(s|t) is derived from the IN corpus using relative-frequency estimates, and po(s|t) is an instance-weighted model derived from the OUT corpus. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Unfortunately, we were unable to correctly run the IRSTLM quantized variant. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | This is a straightforward technique that is arguably better suited to the adaptation task than the standard method of treating representative IN sentences as queries, then pooling the match results. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probability of occurrence of each hanzi in a transliteration (pTN(hanzi;)) using the maximum likelihood estimate. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | A different but supplementary perspective on discourse-based information structure is taken by one of our partner projects, which is interested in conventionalized patterns (e.g., order of information in news reports). |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | This is because different judges focused on different language pairs. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | HR0011-06-C-0022. |
The corpus was annotated with different linguistic information. | 0 | We respond to this on the one hand with a format for its underspecification (see 2.4) and on the other hand with an additional level of annotation that attends only to connectives and their scopes (see 2.5), which is intended as an intermediate step on the long road towards a systematic and objective treatment of rhetorical structure. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | We utilized two kinds of datasets in our experiments: (i) monolingual treebanks9 and (ii) large amounts of parallel text with English on one side. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Table 5: Individual Performance of KSs for Disasters (e.g., "the mayor" vs. "the journalist"). |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The tokens w are generated by token-level tags t from an HMM parameterized by the lexicon structure. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | However, our full model takes advantage of word features not present in Graça et al. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The same argument holds for resolving PP attachment of a prefixed preposition or marking conjunction of elements of any kind. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | shortest match at each point. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Hyperparameters Our model has two Dirichlet concentration hyperparameters: α is the shared hyperparameter for the token-level HMM emission and transition distributions. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | It is sometimes claimed that one of the advantages of dependency grammar over approaches based on constituency is that it allows a more adequate treatment of languages with variable word order, where discontinuous syntactic constructions are more common than in languages like English (Mel’ˇcuk, 1988; Covington, 1990). |
The corpus was annotated with different linguistic information. | 0 | (Webber et al., 2003)). |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | The work of Rounds (1969) shows that the path sets of trees derived by IG's (like those of TAG's) are context-free languages. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | What both of these approaches presume is that there is a sin gle correct segmentation for a sentence, against which an automatic algorithm can be compared. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | In all figures, we present the per-sentence normalized judgements. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Ex: The government said it ... |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Now assume we have n pairs (x1,i, x2,i) drawn from X1 × X2, where the first m pairs have labels yi, whereas for i = m+1...n the pairs are unlabeled. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW). |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | In Modern Hebrew (Hebrew), a Semitic language with very rich morphology, particles marking conjunctions, prepositions, complementizers and relativizers are bound elements prefixed to the word (Glinert, 1989). |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In German, the verbgroup usually consists of a left and a right verbal brace, whereas in English the words of the verbgroup usually form a sequence of consecutive words. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | We define a symmetric similarity function K(ui, uj) over two foreign language vertices ui, uj ∈ Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | We can give a tree pumping lemma for TAG's by adapting the uvwxy-theorem for CFL's since the tree sets of TAG's have independent and context-free paths. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003). |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Ex: The regime gives itself the right... |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 32 81. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Clearly, retaining the original frequencies is important for good performance, and globally smoothing the final weighted frequencies is crucial. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Evaluation of Morphological Analysis. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 7 68.3 56. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | For each set, the phrases with bracketed frequencies are considered not paraphrases in the set. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | In defining LCFRS's, we hope to generalize the definition of CFG's to formalisms manipulating any structure, e.g. strings, trees, or graphs. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Intuitively, it places more weight on OUT when less evidence from IN is available. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | On each language we investigate the contribution of each component of our model. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 3. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | In the named entity domain these rules were Each of these rules was given a strength of 0.9999. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 4 69.0 51. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior Chinese-English Electronic Dictionary.' |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Later, BerkeleyLM (Pauls and Klein, 2011) described ideas similar to ours. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | rently, some annotations (in particular the connectives and scopes) have already moved beyond the core corpus; the others will grow step by step. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Memory usage is likely much lower than ours. The original paper (Germann et al., 2009) provided only 2s of query timing and compared with SRI when it exceeded available RAM. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Note that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features, which can be referred to by finite-state grammatical rules. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | AdaBoost.MH maintains a distribution over instances and labels; in addition, each weak-hypothesis outputs a confidence vector with one confidence value for each possible label. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | The tree denoting this derivation of γ is rooted with a node labeled γ having k subtrees for the derivations of γ1, ..., γk. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages. |
Here we present two algorithms. | 0 | N, portion of examples on which both classifiers give a label rather than abstaining), and the proportion of these examples on which the two classifiers agree. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | These knowledge sources determine whether the contexts surrounding an anaphor and antecedent are compatible. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | The first step in the learning process is to generate training examples consisting of anaphor/antecedent resolutions. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | The code is opensource, has minimal dependencies, and offers both C++ and Java interfaces for integration. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Interpolation search has a more expensive pivot but performs less pivoting and reads, so it is slow on small data and faster on large data. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | This is a somewhat less direct objective than that used by Matsoukas et al., who make an iterative approximation to expected TER. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Step 1. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | The development of a naïve Bayes classifier involves learning how much each parser should be trusted for the decisions it makes. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | The extent to which this constraint is enforced varies greatly across existing methods. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 3 The Coreference Resolution Model. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | In principle, it would be possible to encode the exact position of the syntactic head in the label of the arc from the linear head, but this would give a potentially infinite set of arc labels and would make the training of the parser very hard. |
This corpus has several advantages: it is annotated at different levels. | 0 | Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | The use of global features has improved the performance on MUC6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC7 test data from 85.22% to 87.24% (14% reduction in errors). |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Our original hope in combining these parsers is that their errors are independently distributed. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | In a fully supervised setting, the task is to learn a function f such that for all i = 1...m, f(x1,i, x2,i) = yi. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | The availability of comparable corpora is limited, which is a significant limitation on the approach. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The algorithm works due to the fact that not all permutations of cities have to be considered explicitly. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The same numbers were used for each data structure. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | As a result, Habash et al. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | In this paper, we will propose an unsupervised method to discover paraphrases from a large untagged corpus. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | However, those methods need initial seeds, so the relation between entities has to be known in advance. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | The combining algorithm is presented with the candidate parses and asked to choose which one is best. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 1 | We propose an unsupervised method to discover paraphrases from a large untagged corpus, without requiring any seed phrase or other cue. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | We refer to (T, W) as the lexicon of a language and ψ for the parameters for their generation; ψ depends on a single hyperparameter β. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | For the inverted alignment probability p(bi|bi−1; I; J), we drop the dependence on the target sentence length I. 2.2 Word Joining. |