id stringlengths 7-12 | sentence1 stringlengths 6-1.27k | sentence2 stringlengths 6-926 | label stringclasses 4 values |
---|---|---|---|
train_92700 | Like in HMM, it is the notion of hidden states that facilitates "summarizing" distributed information and finding the global optimum. | 3 Tree Transfer as a Task for HMTM HMTM Assumptions from the MT Viewpoint. | neutral |
train_92701 | So, one solution to this unbalanced class distribution is to split the 'nonemotion' (emo_ntrl) class into several subclasses. | each feature value is boolean in nature, with discrete value for intensity feature at the word level. | neutral |
train_92702 | Thus, when performing a task such as Question Answering (QA), many new aspects have to be taken into consideration. | in Table 3, the OQA missed some of the answers due to erroneous sentence splitting, either separating text into two sentences where it was not the case or concatenating two consecutive sentences; thus missing out on one of two consecutively annotated answers. | neutral |
train_92703 | Furthermore,questions 15,16,18,19 and 20 contain both factual as well as opinion aspects and the OQA system performed better than the TQA, although in some cases, answers were lost due to the artificial boosting of the queries containing NEs of the EAT (Expected Answer Type). | we take into account the first 1, 5, 10 and 50 answers. | neutral |
train_92704 | Denote A i as the multilabel in Y i that corresponds to the nonleaf categories and Sib(z) denotes the sibling nodes of z, that is the set of nodes that have the same parent with z, except z itself. | there are different kinds of loss functions l(y,ȳ). | neutral |
train_92705 | Λ(y) is the feature representation of y. | hierarchical categorization methods can be divided in two types: local and global approaches (Wang et al., 1999;Sun and Lim, 2001). | neutral |
train_92706 | Inspired by the example in Figure 1, we emphasize it is also important to separate the ancestor node in the correct path from their sibling node. | we can classify a document x to label y : where F (•) is a map function. | neutral |
train_92707 | We compare a baseline relying on word frequency measures with one combining word frequency with shallow linguistic features. | due to the sparseness of tagged instances, words that occur with a very high frequency in the corpus automatically receive a lower score than low-frequent words. | neutral |
train_92708 | After clustering, we can get a specification tree resembles the one in subsection 2.1. | the proposed approach is based only on product specifications. | neutral |
train_92709 | The root is the product itself. | when the features in a sentence are correctly recognized, Words describing those features are likely to be identified by our methods. | neutral |
train_92710 | We consider the meanings of words per the above example; we will recognize "có", a meaning word in our meaning word set, which reflects a possessive relationship between "Máy tính" and "dung lượng RAM lớn nhất". | if c is a concept, its key phrases, its title name, its redirect name and its category are all added as entries in the ViDic. | neutral |
train_92711 | If so, then how much does our classifier help compared to querying random translations? | in this fashion iDiOM searches for images that are relevant to the intended concept as opposed to using a possibly ambiguous query. | neutral |
train_92712 | Table 2 shows the break-down of the distribution of sentence types. | the association language patterns can capture word relationships in sentences, thus yielding higher performance than the baseline system using single words alone. | neutral |
train_92713 | Instead, they are usually composed of the words with long-distance dependencies, which cannot be easily captured by n-grams. | an association language pattern is defined herein as a combination of multiple associated words, denoted by the task of association pattern mining is to mine the language patterns of frequently associated words from the training sentences. | neutral |
train_92714 | Our method (travel blog) was much better than the generic blog method, which indicates that travel blogs are a useful information source for the extraction of travel information. | we used the value of k=4, which was determined in a pilot study. | neutral |
train_92715 | Among the 50 errors, 5 entries (10%) were too short (fewer than four sentences) to be identified by our method. | it is costly and time consuming to compile travel information for all tourist spots and to keep them up to date manually. | neutral |
train_92716 | We present the game rules design, the preparation of the game documents and the evaluation of the players' score. | this fact has encouraged a formulation of an alternative way of data collection, "Games With a Purpose" methodology (GWAP), (van Ahn and Dabbish, 2008). | neutral |
train_92717 | In TCE_DI, term delimiters are identified first. | in this work, a new feature on the relevance between different term candidates is integrated with other features to validate their domain specificity. | neutral |
train_92718 | Even if the translations from the source and target languages are semantically transferred to the intermediate language, lexically it is rarely the case. | we consider each such source-target pair a translation candidate. | neutral |
train_92719 | The similarity of the two expanded pivot language descriptions gives a better indication on the suitability of the translation candidate. | the recall value of a manually created Japanese-English dictionary is higher than any automatically generated dictionary's value ( Table We evaluated 2000 randomly selected translation pairs, manually scoring them as correct (the translation conveys the same meaning, or the meanings are slightly different, but in a certain context the translation is possible: 79.15%), undecided (the translation pair's semantic value is similar, but a translation based on them would be faulty: 6.15%) or wrong (the translation pair's two entries convey a different meaning: 14.70%). | neutral |
train_92720 | The speed of this step is well over 6K tokens/hour. | in addition, the PATB part 1, part 2 and part 3 data is automatically converted into CATiB representation. | neutral |
train_92721 | The document-level pre-filtering reduces the overall processing time by about 40 % (from 4 to 2.5 days on a 100-CPU cluster). | the search is initialized by adding a single hypothesis for each target sentence t m ∈ Θ to the stack for j = 1: During the left-to-right search , state transitions of the following type occur: where the partial score is updated as: Here, f (1, j, i) is a partial feature vector computed for all the additional source and target positions processed in the last extension step. | neutral |
train_92722 | For English sentence preprocessing, we use the Stanford parser with output of typed dependency relations. | this null alignment ratio is relatively high in comparison to the French-English alignment, in which about 9% of French sentences and 6% of English sentences are not aligned. | neutral |
train_92723 | The definition of the compositional property does not allow re-ordering. | many phrase pairs in the phrase table have joint and both marginal frequencies all equal to 1. | neutral |
train_92724 | On both the reordering classification and a Chinese-to-English translation task, we show improved performance over a baseline SMT system. | we used GIZA++ to produce alignments, enabling us to compare using a DPR model against a baseline lexicalized reordering model (Koehn et al., 2005) that uses MLE orientation prediction and a discriminative model (Zens and Ney, 2006) that utilizes an ME framework. | neutral |
train_92725 | We will also apply DPR on a larger data set to test its performance as well as its time efficiency. | the training times for ME models are usually relatively high, especially when the output classes (i.e. | neutral |
train_92726 | Belkin and Goldsmith (2002) applied spectral analysis to understand the struture of morpho-syntactic networks of English words. | this work is restricted to the study of English DSNs only 1 . | neutral |
train_92727 | Semantic DSN: The construction of this network is inspired by (Lin, 1998). | usually, the co-occurrence patterns with respect to the function words are used to define the syntactic context, whereas that with respect to the content words define the semantic context. | neutral |
train_92728 | A paraphrase generation tool usually starts with a sentence which may be very similar to some potential solution. | this method, based on large graph exploration by Monte-Carlo sampling, produces results comparable with state-of-the-art paraphrase generation tools based on SMt decoders. | neutral |
train_92729 | The total cost can be represented as: where l_f is the cost to code the index of the feature and l_θ is the number of bits required to code the coefficient of the selected feature. | we used "foreground/background" clustering to find cohesive clusters for various word senses in the ONTONOTES data, considering both "semantic" and "syntactic" similarity between the word senses. | neutral |
train_92730 | Similarly, other meanings of "dismiss" (as in "dismiss an idea") should not share features with "fire". | ℓ_0 penalty methods, like stepwise feature selection, give approximate solutions but produce models that are much sparser than the models given by ℓ_1 methods, which is quite crucial in WSD (Florian and Yarowsky, 2002). | neutral |
train_92731 | Experimentation shows that the Bayesian decision framework is superior to the use of likelihoods for segmentation. | there are also a large number of chain continuations across the utterance boundary, which implies that a story boundary is less likely. | neutral |
train_92732 | Pitch accents are characterized not only by their absolute pitch height, but also by contrast with neighboring syllables. | in addition, we aim to compare the impact of different sources of training data. | neutral |
train_92733 | We experimented with four different language models. | we described the first stochastic morphological parser for Turkish and gave two applications. | neutral |
train_92734 | The binary plus ⊕ and the times ⊗ operations of the counting semiring are defined as the sum of the weight vectors. | the turkish alphabet includes six special letters (ç,g, ı,ö, ş,ü) that do not exist in English. | neutral |
train_92735 | One can use a stochastic morphological parser to do spelling checking and correction, and present spelling suggestions ranked with the parser output probabilities. | the n th value of the vector in the counting semiring just counts the appearances of the n th arc of mt in a path. | neutral |
train_92736 | In the model used in this work, the most common completions are NN, NNS, and NNP. | see Figure 3 for a simple visual example of how this works. | neutral |
train_92737 | For clarity, we define the local factors as: Note that we can ignore the subscript t at Ψ^2_t(y_{t−1} = j, y_t = i) by defining an HMM-like model, that is, transition matrix Ψ^2_{j,i} is independent of t. As exact inference, we use the forward-backward procedure to calculate marginals. | time complexity is O(t|Y|^2) for exact inference (i.e., forward-backward and Viterbi algorithm) of linear-chain CRFs (Lafferty et al., 2001). | neutral |
train_92738 | linear-chain), an exact inference can be obtained efficiently if the number of output labels is not large. | this scheme uses only supported features that are used at least once in the training examples. | neutral |
train_92739 | Second row: time versus testing accuracy/F1. | , we can minimize A_t(z_t), ∀t simultaneously, and then update w_t ∀t together. | neutral |
train_92740 | 7) For each candidate particle calculate the fitness function. | table 1 shows that, in all datasets, accuracy improved up to 9% by optimizing the cost of each edit operation. | neutral |
train_92741 | The acceptance probability then just depends on the graph distances. | see Figure 5 (this is just for the auth graph). | neutral |
train_92742 | (2) Another limitation is that the sentence scores calculated from existing methods usually do not have very clear and rigorous probabilistic interpretations. | we have: Here we use parameter U_{st} for the probability of choosing base model s given topic t, p(S_i = s|T_i = t) = U_{st}, where Σ_s U_{st} = 1. | neutral |
train_92743 | In the GIVE scenario (Byron et al., 2009), users try to solve a treasure hunt in a virtual 3D world that they have not seen before. | to the online experiment, 31% of participants were male and 65% were female (4% did not specify their gender). | neutral |
train_92744 | Integrated NLG systems have a simpler architecture because they do not need to model interactions between modules. | in this case the realizer produced only 16 solutions, all of which maintained referential coherence. | neutral |
train_92745 | In the second case, we used a grammar with the same trees, but annotated with discourse referents. | they still face the problem of computational complexity that was originally solved by the pipeline model. | neutral |
train_92746 | Online product reviews are a crucial source of opinions about a product, coming from the people who have experienced it first-hand. | any other sentence in a review that does not fit the above definition of an opinion sentence is considered as a non-opinion sentence. | neutral |
train_92747 | The set of dependency relations is specific to a given parser -we use the Stanford parser 1 for computing dependency relations. | in such a situation, one critical issue might be the sparseness of the very specific linguistic features, which may cause the classifier learned from such features to not generalize. | neutral |
train_92748 | A study of the inter-annotator agreement between two human annotators has been performed on a set of 100 questions. | (Table 1), assuming the system only tagged operas as AP, lenient accuracy will be 1, exact accuracy will be 0, precision for the AskingPoint class will be 1 and its recall will be 0.5. | neutral |
train_92749 | We consider that AP takes precedence over the EAT. | the distribution of AP classes in the annotated data is shown in the table 2. | neutral |
train_92750 | For each abstract sentence, we assign a score to every document sentence as the sum of its filtered BE scores divided by the number of BEs in the sentence. | the use of kernel is based on the accuracy we achieved during training. | neutral |
train_92751 | For supervised learning methods, huge amount of annotated or labeled data sets are obviously required as a precondition. | every abstract sentence contributes to the BE score of each document sentence and we select the top N sentences based on average BE scores to have the label +1 and the rest to have the label −1. | neutral |
train_92752 | If a clause does not have any POS tag that can serve as a main verb (VB, VBD, VBP, VBZ), it is marked as missing a main verb. | our approach uses a regular grammar and alignment information to detect missing verbs and draws from examples in documents determined to be relevant to the query to insert a new verb translation. | neutral |
train_92753 | Under this assumption, the translation of "被捕" in the above example should be placed in the position between "Saddam" and ".". | in practice, more than one VTG may be found in a clause. | neutral |
train_92754 | Simard et al in 2007 even developed a statistical phrase based MT system in a postediting task, which takes the output of a rulebased MT system and produces post-edited target-language text. | readers could better determine if the modified sentence better captured the meaning of the source sentence. | neutral |
train_92755 | First, we rewrite the summation in (3) as a difference of fractional harmonic numbers. Using the recurrence for harmonic numbers, we then use the asymptotic expansion H_F ≈ log F + γ + 1/(2F), omitting trailing terms which are O(F^{−2}) and smaller powers of F. Omitting the trailing term leads to the approximation in Antoniak (1974). | due to a misinterpretation of Antoniak (1974), GGJ06 use an approximation that leaves out all the P_1(w) terms from (4). | neutral |
train_92756 | There has been an increase in available N -gram data and a large amount of web-scaled N -gram data has been successfully deployed in statistical machine translation. | figure 2(a) shows an example trie structure. | neutral |
train_92757 | The succeeding (n+1)-grams are stored in a contiguous region and sorted by the word id of w n+1 . | lossless representation of N -gram is a key issue even for lossy approaches. | neutral |
train_92758 | We find that splitting words into their stem and suffix components using a morphological analyzer and disambiguator results in significant perplexity reductions of up to 27%. | the log-probability spent on the zero suffixes in the split+0 dataset has to be spent on trying to decide whether to include a stem or suffix following a stem in the split dataset. | neutral |
train_92759 | We note that, in the case of the former experiments, the size of the unlabeled examples is increasing in the direction 98a to 91a. | if we use unlabeled data within the epoch of the test set, we hardly see a degradation trend as the time gap between the epochs of seeds and test set is increased. | neutral |
train_92760 | This can be achieved using MapReduce, a distributed computing framework (Dean, 2004) Elsayed et al. | here we describe some results from a system we built to perform this task on Arabic documents. | neutral |
train_92761 | For example, in a "Conflict-Attack" event, "Attacker" and "Target" are more important than "Person" to indicate the event time. | various situations are evolving, updated, repeated and corrected in different event mentions. | neutral |
train_92762 | For example, in the following we can propagate "Saturday" from a "Justice-Convict" event to a "Justice-Sentence" event because they both involve arguments "A state security court/state" and "newspaper/Monitor": [Sentence including EM i ] A state security court suspended a newspaper critical of the government Saturday after convicting it of publishing religiously inflammatory material. | it will be valuable to design inference methods for more fine-grained events. | neutral |
train_92763 | Topic models have been used to study popularity of communities (Griffiths and Steyvers, 2004), the history of ideas (Hall et al., 2008), and scholarly impact of papers (Gerrish and Blei, 2010). | we ran 50 iterations for both categories, which was chosen as a reasonable trade-off between pattern precision and recall based on some earlier pilot experiments. | neutral |
train_92764 | We introduce a dataset of abstracts labeled with the three categories. | topic models do not extract detailed information from text as we do. | neutral |
train_92765 | (2008) and Miwa et al. | all the experiments are evaluated using commonly-used Precision (P), Recall (R) and harmonic F1-score (F1). | neutral |
train_92766 | Finally, since the dependency type between "PROT1" and "Association" is prep_between, the preposition word "between" and its constituent ancestors are added into the SCP as rendered by the dotted lines. | this paper proposes a principled way to automatically generate constituent structure representation for tree kernel-based protein-protein interaction (PPI) extraction. | neutral |
train_92767 | Since determining protein interaction partners is crucial to understand both the functional role of individual proteins and the organization of the entire biological process, there is a significant interest in protein-protein interaction (PPI) extraction. | 4) Merge any two consecutive NP/VP nodes along the paths into a single one. | neutral |
train_92768 | This indicates the importance of the shortest dependency path over other paths in the dependency path tree or the dependency graph. | we can reshape the constituent parse tree by making use of the shortest dependency path between two proteins. | neutral |
train_92769 | 2007), graph kernels (Airola et al., 2008) and subsequence kernels (Kim et al., 2010), show some promising results for PPI extraction. | while tree kernels based on constituent parse trees achieve great success in semantic relation extraction (Zhang et al., 2006;Zhou et al., 2007a;Qian et al., 2008) and semantic role labeling (Moschitti, 2004;Zhang et al., 2008) from the newswire narratives, they haven't been fully explored for PPI extraction in the biomedical domain. | neutral |
train_92770 | In our experiments, we first take dev.a as our development set for minimum-error rate tuning (Och, 2003) and then report the final translation accuracies on dev.b. | most of current syntax-based SMT systems use IBM models (Brown et al., 1993) and hidden Markov model (HMM) (Vogel et al., 1996) to generate word alignments. | neutral |
train_92771 | Our results show that source-reordering is beneficial for the language pairs with high mutual word order disparity. | in this paper we take the idea of learning source permutation one step further along a few dimensions. | neutral |
train_92772 | Subsequently, the siblings under S in the resulting tree are permuted, "must" is reordered across the whole clause and placed to the first position (see Figure 3c). | the model in (Li et al., 2007) is explicitly aimed at long-distance reorderings (English-Chinese), prunes the alignment matrix gradually to fit the source syntactic parse and employs Maximum-Entropy modeling to choose the optimal local ITG-like permutation step of sister subtrees but interleaves that step with a translation step. | neutral |
train_92773 | In order to illustrate the performance of the different reordering models, we consider two training sentences taken from the IWSLT 2010 DIALOG task. | for the previous orientation weight, the probability P c (p, o) of the phrase pair p having a given orientation o, considering the set P rev(o), with all phrase pairs that are linked to p that would lead to a orientation o, is given by: In this equation, α(p ) is the number of paths to the p node from the first phrase, β(p) is the number of paths from p to the last phrase and β(bs) results in the number of possible paths. | neutral |
train_92774 | In this case, the phrase pair "不需要 (bu xu yao)"→"need not" and the phrase pair "它 (ta)"→"it" are linked by an edge even though they are not adjacent. | if we translate "疼 (teng)" by itself, we would have to translate "很 (hen)" without "疼 (teng)". | neutral |
train_92775 | If the learner is uncertain about an instance, that shows that the learning model is not able to deal with the instance properly. | the original corpora was randomly partitioned into 5 parts, out of which, a single part was retained for testing the model, and the remaining 4 parts were used for the training and applying our instance selection strategies. | neutral |
train_92776 | The test data was only used for the final accuracy report. | based on the work of Andrew and Gao 2007, we know that OWL-QN method guarantees the convergence. | neutral |
train_92777 | All pairs knew each other previously and were of the same gender and approximately the same age. | the task becomes more complicated than typical coreference resolution for written texts because a referent is considered as either anaphoric (i.e. | neutral |
train_92778 | The entries of each chromosome are randomly initialized to either 0 or 1. | in general, in the t th iteration, a combined population R t = P t + Q t is formed. | neutral |
train_92779 | In anaphora resolution, 1 as in other HLT tasks, optimization to a metric is essential to achieve good performance (Hoste, 2005;Uryupina, 2010). | this is obviously less than the evaluation figure obtained by optimizing the first metric. | neutral |
train_92780 | Event coreference resolution is an important task in natural language processing (NLP) research. | the arguments information is extracted automatically from the premodifiers and propositional phrase attachments. | neutral |
train_92781 | Without a successful event coreference resolution such separated pieces of information cannot be assembled properly. | ideally, the selected instances should represent the coreferent status between any two mentions. | neutral |
train_92782 | Last section will wrap up with a conclusion and future research directions. | we propose a revised training instance selection strategy which reflects the true sample space of the original coreferent/non-coreferent status between mentions. | neutral |
train_92783 | Any verb phrase alignment pair with score lower than a threshold score of 0.5 is ignored. | there are still some errors in respect to the morphology of the verb phrases, which the Language Model is unable to tackle. | neutral |
train_92784 | We do not have a list of transliteration pairs for the training of our Hindi to Urdu transliteration system. | the authors wish to thank the anonymous reviewers for their comments. | neutral |
train_92785 | It is complicated to define different semantic types, and is tedious to train a large number of models used for different semantic information. | this semantic information for translation is various and difficult to classify. | neutral |
train_92786 | The similar action is applied to SC (the context of k sc ). | semantic information is various for NE translation. | neutral |
train_92787 | Because the proposed three features cannot be used separately, we do not compare their individual effectiveness. | they should be weighted differently according to their contributions. | neutral |
train_92788 | These figures reveal that both distributions of deletion and insertion are comparable. | our approach differs from them in that we employ the wisdom of crowds of native speakers, not necessarily language teachers, to compile a large-scale learners' corpus. | neutral |
train_92789 | Our work differs from them in that we (1) do not restrict ourselves to a specific error type such as mass noun; and (2) exploit a large-scale real world data set. | both models corrected the examples below: Original: the model trained on 0.3M sentences corrected the following example: Original: (the learner made an error in conjugation form.) | neutral |
train_92790 | For comparison purposes we include in this table results from using only first level features (FLF), only modality specific meta features (MSMF), and the combination of both (MSMF+FLF). | we also include percentages of capitalized words, use of quotations, and use of signature, that we believe allow writers more freedom to express their unique writing style. | neutral |
train_92791 | That means that we end up with different arrangements of the training instances into clusters, one arrangement per modality. | in the typical scenario, we may have an entire document (several pages long), or even an entire book, while in the case of online data from social media we will have very short texts that are a couple of sentences long. | neutral |
train_92792 | The meta features in our work are derived from clustering of the feature vectors. | the results in table 8 show that the best accuracy in the CHE collection is achieved by our method in all four data sets. | neutral |
train_92793 | The last section summarizes our findings and outlines our research goals for the immediate future. | to the best of our knowledge, this is the first work exploiting characterbased language models for AA, although, Raghavan et al. | neutral |
train_92794 | For larger k values only the data set with 5 authors reached better results. | in our framework instead of having a single feature vector for x, we generate m smaller vectors that contain complementary types of features, or views, describing the instances. | neutral |
train_92795 | The features include TF•IDF, "First occurrence", and "Is in title or not". | a simple hill-climbing search can be used to optimize F1-Score. | neutral |
train_92796 | This paper is concerned with the TREC REF track. | many semi-supervised methods have been proposed to recognize fine-grained types of entities. | neutral |
train_92797 | Fleischman (2002) employed a supervised learning method that considered the local context surrounding the entity as well as global semantic information. | approaches proposed in expert search can be used for entity finding. | neutral |
train_92798 | It has been reported that SRL can benefit from phrase-structure and dependency-based syntactic parsing (Hacioglu, 2004 Pradhan et al., 2005). | much research efforts have been devoted to statistical machine learning methodologies for SRL (Bjkelund et al., 2009;Gildea and Jurafsky, 2002;Shi et al., 2009;Johansson and Nugues, 2008a;Lang and Lapata, 2010;Pradhan et al., 2008;Fürstenau and Lapata, 2009;Titov and Klementiev, 2011, among others). | neutral |
train_92799 | We also applied the solution of combining informativity and representativeness (4.3) to other informativity-based strategies. | ours is the first work to use such a combined strategy for SRL. | neutral |
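The rows above share a fixed four-column, pipe-delimited layout (id | sentence1 | sentence2 | label). The sketch below is a minimal, illustrative way such rows could be read into Python dictionaries; the helper name `parse_row`, the `FIELDS` list, and the sample row are hypothetical choices rather than part of any official tooling for this dataset, and the parser assumes pipes occur only as column delimiters surrounded by spaces.

```python
from typing import Dict

# Column names taken from the table header above.
FIELDS = ["id", "sentence1", "sentence2", "label"]

def parse_row(line: str) -> Dict[str, str]:
    """Split one pipe-delimited row into the four named fields."""
    # Drop the trailing "|" delimiter, then split on " | ".
    # This assumes pipes appear only as column delimiters with surrounding
    # spaces; rows whose sentences contain " | " would need a stricter parser.
    body = line.strip().rstrip("|").strip()
    cells = [cell.strip() for cell in body.split(" | ")]
    if len(cells) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} cells, got {len(cells)}: {line!r}")
    return dict(zip(FIELDS, cells))

if __name__ == "__main__":
    sample = ('train_92700 | Like in HMM, it is the notion of hidden states that '
              'facilitates "summarizing" distributed information and finding the '
              'global optimum. | 3 Tree Transfer as a Task for HMTM HMTM Assumptions '
              'from the MT Viewpoint. | neutral |')
    row = parse_row(sample)
    print(row["id"], row["label"])  # -> train_92700 neutral
```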