id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_93500 | 2(e) for the blockcutpoint graph of the example. | the runtime for checking (c) amortizes to O(s) per rule. | neutral |
train_93501 | Here, we apply this constraint only to the representation vectors {a i }. | we consider two alternative transformations. | neutral |
train_93502 | These methods can efficiently estimate the co-occurrence statistics to model contextual distributions from very large text corpora and they have been demonstrated to be quite effective in a number of NLP tasks. | firstly, many different types of semantic knowledge can all be represented as a number of such ranking inequalities, such as synonymantonym, hyponym-hypernym and etc. | neutral |
train_93503 | This material is based in part on research sponsored by the NSF under grant IIS-1249516 and DARPA under agreement number FA8750-13-2-0017 (the DEFT program). | our classifier uses extensive feature sets to scale natural logic to the enormous number of phrase pairs in PPDB. | neutral |
train_93504 | In this paper, we show how a clear concept of semantics can be applied to large-scale paraphrase resources. | the views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government. | neutral |
train_93505 | We have not mentioned the effect of our unary merging (Section 3), but the result indicates it has almost the same effect as the previously proposed padding method (Zhu et al., Shift- We divide baseline state-of-the-art systems into three categories: shift-reduce systems (Sagae and Lavie, 2005;Sagae and Lavie, 2006;Zhu et al., 2013), other chart-based systems (Petrov and Klein, 2007;Socher et al., 2013), and the systems with external semi supervised features or reranking (Charniak and Johnson, 2005;McClosky et al., 2006;Zhu et al., 2013). | all systems are trained with the structured perceptron. | neutral |
train_93506 | The key ideas are new feature templates that facilitate state merging of dynamic programming and A* search. | for example, if we lexicalize this model by adding features that depend on the head indices of s 0 and/or s 1 , it increases to O(n 6 • |G| • |N |) since we have to maintain three head indices of A, B, and C. This is why Sagae and Lavie's features are too expensive for our system; they rely on head indices of s 0 , s 1 , s 2 , s 3 , the left and right children of s 0 and s 1 , and so on, leading prohibitively huge complexity. | neutral |
train_93507 | To this end, we propose tree approximation to obtain CCG sub-graphs under the factorization using tree parsing algorithms. | nevertheless, empirical evaluation indicates that explicitly or implicitly using tree-structured information plays an essential role. | neutral |
train_93508 | We also use the syntactic dependency trees provided by the CCGBank to obtain necessary information for graph parsing. | we also use the syntactic dependency trees provided by the CCGBank to obtain necessary information for graph parsing. | neutral |
train_93509 | Given these sentence representations, we predict the similarity scoreŷ using a neural network that considers both the distance and angle between the pair (h L , h R ): where r T = [1 2 . | for example, it is a good choice for dependency trees, where the number of dependents of a head can be highly variable. | neutral |
train_93510 | This is because 1) the responses retrieved by retrieval-based method are actually written by human, so they do not suffer from grammatical and fluency problems, and 2) the combination of various feature functions potentially makes sure the picked responses are semantically relevant to test posts. | it is a bit surprising that this can be achieved to a reasonable level with a linear transformation in the "space of representation", as validated in Section 5.3, where we show that one post can actually invoke many different responses from NRM. | neutral |
train_93511 | Each topic also has 10 documents and 4 model summaries. | in addition, we also extract the clauses functioning as subjects of sentences as NPs, such as "that clause". | neutral |
train_93512 | Recall that we require that one NP and at least one VP compose a sentence. | existing multi-document summarization (MDS) works can be classified into three categories: extraction-based approaches, compression-based approaches, and abstraction-based approaches. | neutral |
train_93513 | A salience score is calculated for each phrase to indicate its importance. | each VP is represented as a set of its concepts and the index value is calculated for each pair of VPs. | neutral |
train_93514 | We therefore include it for comparison purposes. | 9 in matrix form: We will show that lim Here, Because α < 1, this column sum converges to zero when t → ∞. | neutral |
train_93515 | We perform a leave-one-out evaluation of each event. | this improvement is statistically significant for all ngram precision, recall, and Fmeasures at the α = .01 level using the Wilcoxon signed-rank test. | neutral |
train_93516 | The exemplar sentences from the exemplar selection stage are the most salient and representative of the input for the current hour. | in this model, input sentences are apriori equally likely to be exemplars; the salience values are uniformly set as the median value of the input similarity scores, as is commonly used in the AP literature (Frey and Dueck, 2007). | neutral |
train_93517 | We build on political science and communication theory and use probabilistic topic models combined with time series regression analysis (autoregressive distributed-lag models) to gain insights about the language dynamics in the political processes. | ordinary least square regression finds the coefficients that minimize the mean square error of Y = b + j w T j X j given (X, Y ). | neutral |
train_93518 | Previous linguistic studies have focused on identifying factors that might influence choices of referring expressions. | we propose a language production model that uses dynamic discourse information to account for speakers' choices of referring expressions. | neutral |
train_93519 | These models propose deterministic constraints governing when pronouns are preferred in local discourse, but it is not clear how these would account for speakers' choices of referring expressions, nor it is clear why there should be such deterministic constraints. | it is not clear from this previous work how and why these factors result in the observed patterns of referring expressions. | neutral |
train_93520 | The eventual betrayer is more positive, more polite, but plans less than the victim. | cooperation and betrayal do not happen in a cell cut off from the rest of the world. | neutral |
train_93521 | The best setting for the model parameters 8 is selected via 5-fold cross validation, ensuring that instances from the same game are never found in both train and validation folds. | imminent betrayal is signaled by sudden changes in the balance of conversational attributes such as positive sentiment, politeness, and structured discourse. | neutral |
train_93522 | In this case, training with cold instances is naturally more efficient than training with other types of diseases/symptoms. | the task of analyzing the modality lies beyond of scope of this study (Kitagawa et al., ). | neutral |
train_93523 | Discrepancy: number of occurrences of words, such as should, would, could, etc as defined in LIWC (Tausczik and Pennebaker, 2010). | by applying the maximum weighted matching algorithm on this graph, we can obtain the best role assignment for each team. | neutral |
train_93524 | What is similar between stances and personas on the one hand and roles on the other is that the unit of analysis is the person. | then we describe a series of role identification models. | neutral |
train_93525 | By incorporating constraints into the role identification process, we expect to guide the model using human intuition such that the results will be more interpretable, although the prediction error might increase because of the limitation of the search space. | these approaches do not standardly utilize an outcome as supervision to guide the clustering. | neutral |
train_93526 | Theoretically, word dropout can also be applied to other neural network-based approaches. | we present a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time. | neutral |
train_93527 | Table 2: The DAN achieves slightly lower accuracies than the more complex QANTA in much less training time, even at early sentence positions where compositionality plays a bigger role. | there is a tradeoff: syntactic functions require more training time than unordered composition functions and are prohibitively expensive in the case of huge datasets or limited computing resources. | neutral |
train_93528 | Most of the existing approaches to learning to rank can be generally grouped into three major categories: (i) pointwise approaches, (ii) pairwise approaches, and (iii) listwise approaches. | to overcome the above limitations, this paper investigates SOLAR -a new framework of Scalable Online Learning Algorithms for Ranking, which aims to learn a ranking model from a sequence of training data in an online learning fashion. | neutral |
train_93529 | To overcome the limitations, this paper presents SOLAR a new framework of Scalable Online Learning Algorithms for Ranking, to tackle the challenge of scalable learning to rank. | for example, [12,19] formulated ranking as a regression problem in diverse forms. | neutral |
train_93530 | Bold font on Figure 1 indicates that a vertex belongs to the main core of the graph. | then in its graph representation (its "structural formula"), it is crucial to differentiate between the multiple nodes labeled the same (e. g., C or H). | neutral |
train_93531 | • Amazon: 8,000 product reviews over four different sub-collections (books, DVDs, electronics and kitchen appliances) classified as positive or negative -split into 1,600 for training and 400 for test each (Blitzer et al., 2007). | we believe new kernels that support a very high number of unique node labels could yield even better performances. | neutral |
train_93532 | Moreover, during training we jointly model the alignment and the translation process simultaneously for different language pairs under the same framework. | experiments that demonstrate the effectiveness of our framework will be described in section 4. | neutral |
train_93533 | Different from conventional statistical machine translation approaches, neural machine translation approaches aim at learning a radically end-to-end neural network model to optimize translation performance by generalizing machine translation as a sequence learning problem. | in reality, language pairs between English and many other target languages may not be large enough, and pivot-based SMT sometimes fails to handle this problem. | neutral |
train_93534 | The detailed results are shown in Table 4 (4-th and 6-th rows). | each context feature is represented as a single vector called feature embedding. | neutral |
train_93535 | doctors and nurses or accountants and assistant accountants. | spectral clustering is based on applying sVD to the graph Laplacian and aims to perform an optimal graph partitioning on the NPMI similarity matrix. | neutral |
train_93536 | To examine this distinction better, we use the Jensen-Shannon divergence (JSD) to quantify the difference between the topic distributions across every Figure 2: CDFs for six of the most important topics; the x-axis is on the log-scale for display purposes. | these offer the advantages of not using the validation set and the interpretability properties we highlight in the next section. | neutral |
train_93537 | LSH based tracking segments the vector-space randomly without consideration of the data's distribution. | the key to scale up documents and topics, lies in reducing the number of necessary comparisons. | neutral |
train_93538 | By formulating such ideas as search or MDL problems of given coding length 1 , word boundaries are found in an algorithmic fashion (Zhikov et al., 2010;Magistry and Sagot, 2013). | when we reach the unigram G 1 and need to use a base measure G 0 , i.e. | neutral |
train_93539 | In many cases PYHSMM found more "natural" segmentations, but it does not always conform to the gold annotations. | finally, we note that our proposed model for unsupervised learning is most effective for the language which we do not know its syntactic behavior but only know raw strings as its data. | neutral |
train_93540 | To automatically recognize such linguistic phenomena beyond small "correct" supervised data, we have to extract linguistic knowledge from the statistics of strings themselves in an unsupervised fashion. | we could exclude the latter case 1 For example, Zhikov et al. | neutral |
train_93541 | Our coupled model has two major parameters to be decided. | the key idea is to bundle two sets of POS tags together (e.g. | neutral |
train_93542 | As a widely-used structural classification problem, sequence labeling is prone to suffer from the data sparseness issue. | again, algorithm 1 is used to merge the two data with N ′ = 5K and M ′ = 5K. | neutral |
train_93543 | Along with this paper, we will publish AutoExtend for extending word embeddings to other data types; the lexeme and synset embeddings used in the experiments; and the code needed to replicate our WSD evaluation 2 . | the dataset is the closest we could find for sense similarity. | neutral |
train_93544 | Note that the pairs in the two monolingual datasets should be previously aligned. | we then discuss the details of our procedure for the construction of the Spanish and Farsi word similarity datasets in Section 3. | neutral |
train_93545 | Multi-index techniques allows the very fast computation of top-k queries (Norouzi et al., 2012) on the Hamming space. | we favor the parentship distance by selecting first the signatures where one bit differs from the parent's one. | neutral |
train_93546 | (2) The initial value of sense representation is critical for most statistical clustering based approaches. | this paradigm usually has two limitations: (1) The performance of these approaches is sensitive to the clustering algorithm which requires the setting of the sense number for each word. | neutral |
train_93547 | It is also capable of taking the ordering of words into account, and collecting information from arbitrary features associated with the context. | we demonstrate the effectiveness of our model by applying it to a specific application: predicting topics and sentiments in dialogues. | neutral |
train_93548 | We summarize the experiment results in Table 1. | here, s t represents the sentence at time t, and u t represents additional features. | neutral |
train_93549 | We have released the human annotation data at https://sites.google.com/ site/forrestbao/acl_data.tar.bz2 . | it is necessary to isolate review helpfulness prediction from its outer layer tasks and formulate it as a new problem. | neutral |
train_93550 | For example, in task a, p(positive) = q 3 + q 4 + q 5 . | as recommended in the documentation, we apply repeated SGD over 20 reorderings of each corpus (for comparability, this was also done when fitting Word2Vec). | neutral |
train_93551 | We thank the anonymous reviewers for their detailed and insightful comments on earlier drafts of this paper. | the graph is heterogeneous, with two types of nodes for the news sentences and tweets respectively. | neutral |
train_93552 | M-L captures this notion using the difference in the cross-entropies according to each language model (LM). | we have demonstrated that extracting paraphrases from subsampled data results in higher precision domain-specific paraphrases. | neutral |
train_93553 | The main limitation of the aforementioned methods is the dependence on simplified corpora and WordNet. | (2011), which both used simplified corpora. | neutral |
train_93554 | In addition to that, certain street and landmark names might not be depicted at different zoom levels. | referential attributes of other landmark objects were not modelled due to data sparsity and also to reduce computational costs. | neutral |
train_93555 | This strategy, however, turns out to be far less common in language use if a more discriminatory property is available, as in the example. | this seems to be the case, for instance, of the colour attribute. | neutral |
train_93556 | We refer the reader to Fang et al. | as we will demonstrate in Section 3.5, the generated captions perform significantly better than the nearest neighbor captions in terms of human quality judgements. | neutral |
train_93557 | We extend this intuition to the unsupervised setting of Google image search results and apply it to the lexical entailment task. | we also find some drop in accuracy on the longest paths (bucket 5), especially for wBLESS and BIBLESS, perhaps because semantic similarity is difficult to detect in these cases. | neutral |
train_93558 | (2015), who try different ways of modulating the inclusion of perceptual input in their multi-modal skip-gram model, and find that the entropy of the centroid vector µ works well (where p(µ j ) = µ j || µ|| and m is the vector length): We calculate the directionality of a hyponymhypernym pair with a measure f using the following formula for a word pair (p, q). | examples of pairs in the respective datasets can be found in Table 1. | neutral |
train_93559 | On WBLESS we compare to the reported results of Weeds et al. | since even cohyponyms will not have identical values for f , we introduce a threshold α which sets a minimum difference in generality for hypernym identification: s(p, q) > 0 iff f (q) > f (p) + α, i.e. | neutral |
train_93560 | We use the word2vec tool (Mikolov et al., 2013) to train monolingual vectors, 6 and the CCA-based tool (Faruqui and Dyer, 2014) for projecting word vectors. | to achieve this goal, we replace transliteration by a new technique that captures more complex morpho-phonological transformations of historically-related words. | neutral |
train_93561 | Romanian-English systems obtain only small (but significant for 4K and 8K, p < .01) improvement. | the donor language is used as pivot to obtain translations via triangulation of OOV loanwords ( §2.2). | neutral |
train_93562 | To allow the low-resource system to leverage good translations that are missing in the default phrase inventory, while being stable to noisy translation hypotheses, we integrate the acquired translation candidates as synthetic phrases (Tsvetkov et al., 2013;Chahuneau et al., 2013). | the borrowing system only minimally overgenerates the set of output candidates given an input. | neutral |
train_93563 | In this paper, we introduce a novel recurrent neural network based rule sequence model to incorporate arbitrary long contextual information during estimating probabilities of rule sequences. | (2011) discussed in Section 1, there are several other works using a rule bigram or trigram model in machine translation, Ding and Palmer (2005) use n-gram rule Markov model in the dependency treelet model, Liu and Gildea (2008) applies the same method in a tree-tostring model. | neutral |
train_93564 | They first translate Japanese input into head final English texts obtained by the method of Isozaki et al. | any reordering of a(i, p) changes only the first term and the others are unchanged. | neutral |
train_93565 | This is unfortunate since the CARs directly indicate the comprehensibility of the translated dialogues. | the translations by H S were created by first preparing a file containing all the sentences from the 40 problems in a randomized order and then asking a translator to translate the file sentence-by-sentence, without assuming any specific context. | neutral |
train_93566 | For a given sentence, the greedy unsupervised RAE greedily searches a pair of words that results in minimal reconstruction error by an autoencoder. | we conducted experiments on the wMT metric task data. | neutral |
train_93567 | Each of the above mentioned representations has a different strength in terms of encoding syntactic and semantic contextual information for a given sentence. | this is mainly caused by the flexible word ordering and the existence of the large number of synonyms for words. | neutral |
train_93568 | Consider, for instance, the first reference (denoted as "1 R" in Table 2) and their translations. | to leverage the different advantages and focuses, in terms of benefiting evaluation, of various representations, we concatenate the three representations to form one vector representation for each sentence. | neutral |
train_93569 | The results show that ISF achieves relatively lowε s and increases the domain separation error. | we use standard features including entity types, entity head words, contextual words and other syntactic features derived from parse trees. | neutral |
train_93570 | In order to capture enough variations, we randomly initialize the set of filters to detect different structure patterns. | the parser works substantially better on the TREC dataset since all questions are in formal written English, and the training set for Stanford parser 5 already includes the QuestionBank (Judge et al., 2006) which includes 2,000 TREC sentences. | neutral |
train_93571 | The n-gram convolutions help identify local context and map that to a new higher level feature space. | this gives a high level representation of a movie. | neutral |
train_93572 | Applying multiple hidden layers in succession can require exponentially less data than mapping through a single hidden layer (Bengio, 2009). | (2010) preprocessed the text by stemming, down- casing, and discarding feature instances that occurred in fewer than five reviews. | neutral |
train_93573 | SkipGram can be categorized as one of the simplest neural language models (Mnih and Kavukcuoglu, 2013). | then, by using the unified form, we extract the factors of the configurations that they use differently. | neutral |
train_93574 | Namely, it first counts all the co-occurrences in D, and then, it leverages the gathered co-occurrence information for estimating (possibly better) parameters. | these results are shown in the first and second rows in table 4. | neutral |
train_93575 | θ ∈ R d and γ ∈ R d and defines a joint probability of y and z conditioned on x as follows: where Y(x, z) is the set of all possible label sequences for x and z, and HUCRF forces the interaction between the observations and the labels at each position j to go through a latent variable z j : see Figure 1 for illustration. | hUCRF with pre-training with Brown clusters (hUCRF B ) and CCA-based clusters (hUCRF C ) further improves performance to 91.36% and 91.37%, respectively. | neutral |
train_93576 | This dataset is an interesting * Work done when student at University of Trento resource that we make available to the research community 1 . | such clues are retrieved using a BM25 model on CPDB. | neutral |
train_93577 | To summarize, our algorithm consists of the following steps: 1. | unlike our work, none of these authors developed automatic methods for studying syntactic properties like word order, nor did they utilize recent advances in the field of word alignment algorithms. | neutral |
train_93578 | Similar results at the lexical level have been reported using automated annotations (Prud'hommeaux and Rouhizadeh, 2012;Rouhizadeh et al., 2013). | we then calculate two above statistics for each shuffle and count the number of times the observed values exceed the values produced by the 1000 shuffles. | neutral |
train_93579 | 2Ranking Order Determination: given the comparison set and the selected dimension, determine the order in which the involved items are ranked within the comparisons, in an ascending or descending order? | we build a Naïve Bayes model using cooccurrences as a baseline to predict proper dimensions. | neutral |
train_93580 | Its peak is 8,848 metres (29,029 ft) above sea level. | the probability of ith predicate in Freebase is chosen as the comparison dimension given superlative S and its context C can be written as: where z i is estimated using a sigmoid function: where n is the number of candidate predicates for superlative S, m is the length of concatenated vector V , W m×n is the parameter matrix, b is the bias vector, and σ i is the sigmoid function that applies to the i-th element of argument vector. | neutral |
train_93581 | A chance baseline is obtained by randomly ranking a concept's nearest neighbors. | this approach allows for zero-shot learning, where the model can predict how an object relates to other concepts just from seeing an image of the object, but without ever having seen the object previously (Lazaridou et al., 2014). | neutral |
train_93582 | In extracting predicate-argument structures, it is not possible to directly extract a coordinated noun phrase "wine and sake" as a direct object of the verb "drank". | basically, the cases "ga", "o" and "ni" in the corpus correspond to "nsubj", "dobj" and "iobj", respectively, however, we should apply the alternative conversion to passive or causative voice, since the annotation is based on active voice. | neutral |
train_93583 | Recent work on supertagging using a feedforward neural network achieved significant improvements for CCG supertagging and parsing (Lewis and Steedman, 2014). | the output layer represents probability scores of all possible supertags, with the size of the output layer being equal to the size of the lexical category set. | neutral |
train_93584 | , w N ), the embedding feature of w t (for 1 ≤ t ≤ N ) is obtained by projecting it onto a n-dimensional vector space through the look-up table L w ∈ R |w|×n , where |w| is the size of the vocabulary. | their attempt to tackle the third problem by pairing a conditional random field with their feed-forward tagger provided little accuracy improvement and vastly increased computational complexity, incurring a large efficiency penalty. | neutral |
train_93585 | The oracle is very efficient, computing loss in O(n), compared to O(n 8 ) for the only previously known dynamic oracle with support for a subset of non-projective trees (Gómez-Rodríguez et al., 2014). | to do so, we can implement WEAKLYCONNECtED so that the first call computes the connected components of A in linear time (Hopcroft and tarjan, 1973) and subsequent calls use this information to find out if two nodes are weakly connected in constant time. | neutral |
train_93586 | Performance of the Random-reduced classifier is also better than Unfiltered, with the overall improvement largely resulting from increased recall, but below PRA-reduced. | this assumption is violated when the knowledge base is incomplete which can lead to sentences containing instances of relations being wrongly annotated as negative examples. | neutral |
train_93587 | The results in the first two rows indicate that adding unsmoothed lexical information to the method of Xu et al. | table 1: Data sets and their size (number of sentences). | neutral |
train_93588 | Relation Extraction (RE) is the task of recognizing relationships between entities mentioned in text. | this work was funded in part by Alberta Innovates Center for Machine Learning (AICML), Natural Sciences and Engineering Research Council of Canada (NSERC), Alberta Innovates technology Futures (AItF), and National Institute of Informatics (NII) International Internship Program. | neutral |
train_93589 | One is the 50-d embeddings provided by SENNA (Collobert et al., 2011). | we can distinguish these two paths by virtue of the attached subtrees such as "dobj→commandment" and "dobj→ignition". | neutral |
train_93590 | We propose a new approach to the task of fine grained entity type classifications based on label embeddings that allows for information sharing among related labels. | for example, if the (ordered) top-k labels are person, artist, and location, we output only person and artist as the predicted labels. | neutral |
train_93591 | Our approach is evaluated on two datasets, one comprising clinical reports and the other comprising biomedical abstracts, achieving state-of-the-art results. | one can query Wikipedia for the test mention's string, then employ the titles of the retrieved pages as alternate mention names. | neutral |
train_93592 | For example, Open IE chooses different predicate and argument boundaries and assigns different relations between them. | while our comparisons are valid only for the tested tasks and systems, they do provide valuable evidence for the general question of effective intermediate structures. | neutral |
train_93593 | But it is hard for machine to capture such long distance information. | it is hard for the classifier to recover "它(it)", e.g., "*pro* 这种?(*pro* that kind?)" | neutral |
train_93594 | 1 In addition, motivated by work on overt pronoun resolution, we hypothesize that AZP resolution can be improved by exploiting discourse information. | early approaches to AZP resolution employed heuristic rules to resolve AZPs in Chinese (e.g., Converse (2006), Yeh and Chen (2007)) and Spanish (e.g., Ferrández and Peral (2000)). | neutral |
train_93595 | Theorem 1 implies that, at each step k, R k in (9) can grasp the first (2 k − 1)-th terms of S, whereas S k in (4) can grasp only the first k-th terms of S. Thus, given the number of steps K, Co-Simmate is always more accurate than Co-Simrank because R K is exponentially closer to S than S K to S. Convergence Rate. | the experiments show that Co-Simmate can be 10.2x faster than the state-ofthe-art competitors. | neutral |
train_93596 | For a given set of subtopic candidates with annotated subtopics, {(Sn, yn)} (1≤n≤N), we need to estimate the optimal weight w. Empirically, the optimal weight w should minimize the error between the predicted partition y ' and the true partition y, and it should also have a good generalization capability. | these results confirm that the similarity between two subtopic candidates is affected by many factors and our methods can achieve more desirable query subtopics by learning a similarity measure. | neutral |
train_93597 | Compared with AC, SC achieves 1.86% precision improvement, 2.76% recall improvement, and 2.32% F-Measure improvement. | it is not a trivial task because the underlying intents of the same query may be different for different users. | neutral |
train_93598 | During backtracking, pseudo-nodes are expanded into an acyclic directed graph, i.e., our final category topic hierarchy H c . | after replacing the tags in a CT (see Fig.1) with the topics they belong to, we can then get a topic hierarchy H a = {T a , R a } for each article a. | neutral |
train_93599 | The present work is closely related to previous approaches involved in TempEval campaigns Verhagen et al., 2010;Uz-Zaman et al., 2013;Llorens et al., 2015). | the results of an out of the competition version of the SPINOZAVU team explained in (Caselli et al., 2015). | neutral |
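
As a minimal sketch of how one might load and inspect rows with this schema (id, sentence1, sentence2, label), the snippet below assumes the split has been exported to a CSV file named `train.csv`; the file name and export format are illustrative assumptions, not part of this page.

```python
# Minimal sketch: load a CSV export of this split and check the schema
# summarized in the table header above. "train.csv" is a hypothetical file name.
import pandas as pd

df = pd.read_csv("train.csv")

# Columns should match the table header: id, sentence1, sentence2, label.
print(df.columns.tolist())

# label is categorical with 4 classes (e.g. "neutral").
print(df["label"].value_counts())

# sentence1 lengths fall roughly in the 6-1,270 character range reported above.
print(df["sentence1"].str.len().describe())
```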