id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses 4 values)
---|---|---|---|
train_15900 | Both dict2vec and retrofitting improve with regards to word2vec on similarity benchmarks and seem roughly on par. | dict2vec fails to improve on relatedness benchmarks, whereas retrofitting sometimes improves (as in RG, MEN, and MT), sometimes equals (SCWS) and does worse (353). | contrasting |
train_15901 | These widely-used representations often significantly improve performance in downstream tasks, as they are able to leverage large amounts of unstructured data. | most of the popular collections of word embeddings assign only one vector to each word, thus shifting the burden of word disambiguation to deeper, task-specific layers, which commonly rely on data of much smaller scales. | contrasting |
train_15902 | Specifically, we make the following contributions: • We build a dataset consisting of 11.3k sentences containing a frequently used comparator "像" for simile recognition in Chinese, which can support data-driven approaches. | to English, datasets on simile or • We propose a neural multitask learning framework jointly optimizing three tasks: simile sentence classification, simile component extraction and language modeling. | contrasting |
train_15903 | If the component extractor knows that a sentence contains a simile, it would be more confident to extract the tenor and the vehicle. | if the component extractor tells the classifier that the tenor and the vehicle likely exist, the classifier gets additional information for decision. | contrasting |
train_15904 | For each word, instead of using hidden state h_t only, we combine h_t and its score vector p_t as a representation s_t: Since p_t is directly related to the component extraction task, this labeling connection operation increases the interaction between the two tasks. | words in a sentence should not contribute the same for classification. | contrasting |
train_15905 | As shown in Table 4, the optimized pipeline performs better than the strongest multitask learning setting. | in all settings, the precision scores are lower compared with the recall scores. | contrasting |
train_15906 | Finally, and most similar to our approach, several models have been proposed that directly compare subtrees between two sentences (Chen et al., 2017;Zhao et al., 2016). | all of these models are pipelined; they obtain the sentence structure in a non-differentiable preprocessing step, losing the benefits of end-to-end training. | contrasting |
train_15907 | This also validates the observation of (Chen et al., 2017b), which shows that the sub vector in the ESIM model is looking out for contradictory information. | our architecture allows the inspection of these vectors since they are compressed via factorization, leading to larger extents of explainability -a quality that neural models inherently lack. | contrasting |
train_15908 | State of the art models using deep neural networks have become very good in learning an accurate mapping from inputs to outputs. | they still lack generalization capabilities in conditions that differ from the ones encountered during training. | contrasting |
train_15909 | gain), and was not detrimental to ESIM. | in case of the fastText [MIMIC-III] embeddings knowledge-directed attention was beneficial to both models, as shown in Table 7. | contrasting |
train_15910 | Recently, neural network based approaches (Dong and Lapata, 2016;Jia and Liang, 2016;Xiao et al., 2016;Guu et al., 2017;Dong et al., 2018) have achieved promising performance in semantic parsing. | neural network approaches are data hungry, whose performance closely correlates with the volume of training data. | contrasting |
train_15911 | Specifically, the optimal parameters θ* are obtained as follows: Gradient descent is used to search out optimal parameters θ*. | MLE fails to consider the fact that semantically equivalent regular expressions might be syntactically different. | contrasting |
train_15912 | We encourage the model to generate regular expressions to maximize expected rewards instead of MLE. | concretely, we train the model parameters θ to maximize the following objective function: to compute the expected reward, we need to go over all possible regular expressions, and the number of all possible regular expressions is infinite. | contrasting |
train_15913 | A challenge here is that we do not know the samples before performing the policy-gradient method. | we find that there is a high chance to get the same samples repeatedly when the model is pre-trained using MLE on the training set, because sampling is following the distribution learned by the pre-trained model. | contrasting |
train_15914 | SENNA and other typical word embeddings always assign an identical vector to each word regardless of the input context. | ELMo assigns different vectors to each word depending on the input context. | contrasting |
train_15915 | Similar to (Rabinovich et al., 2017), our model structures the decoder as a collection of recursive modules. | as we discussed in the related work section, we make use of a SQL specific grammar to guide the decoding process, which allows us to take advantage of SQL queries' well-defined structure. | contrasting |
train_15916 | The Seq2Seq models suffer from generating ungrammatical queries, yielding very low exact matching accuracy on Hard and Extra Hard SQL queries. | our model generates valid SQL queries by enforcing the syntax. | contrasting |
train_15917 | Evaluating such forms is crucial to the development of parsing algorithms. | there is no method directly available for evaluation. | contrasting |
train_15918 | logical form does not need to be given by the annotator (QA denotations, for example, require the annotator to know the correct answer to the question -an assumption which doesn't hold for end-users who asked the question with the goal of obtaining the answer). | our goal is to leverage natural language feedback and corrections that may occur naturally as part of the continuous interaction with the non-expert end-user, as training signal to learn a semantic parser. | contrasting |
train_15919 | For at least one of the conditions in the SQL query, its column name is not explicitly mentioned in the question, not even in paraphrased form. | there are still partial semantic clues for inference. | contrasting |
train_15920 | generates each of these individual components by first predicting COLUMN and OP, and then generating VALUE using pointer networks (Vinyals et al.). | we propose a two step approach to tackle this problem in the reverse way, taking advantage of the content of table t as additional knowledge: (i) Generate the condition VALUE from the question, and (ii) predict which COLUMN and OP apply to this VALUE. | contrasting |
train_15921 | BASELINE refers to a baseline for our models where the WHERE clause accuracy is computed by assuming that each candidate (COLUMN, VALUE) pair is included with corresponding OP being equality. | UPPERBOUND accuracy is computed by assuming f_cond and f_op make 100% correct mapping of whether to include a candidate (COLUMN, VALUE) pair and which OP to apply on this condition. | contrasting |
train_15922 | (2016) focus on multichoice questions. | study how to answer user questions with table cells from millions of HTML tables. | contrasting |
train_15923 | It yields the lowest crossing arcs rate since lots of concepts are not aligned. | empty Alignment Issue: our hybrid aligner yields less empty alignments. | contrasting |
train_15924 | The underlying hypothesis is that enriching word vector representations with polysemic information should express itself in performance gains in these tasks. | this hypothesis has never been tested directly, and the ability of word similarity tasks to directly benefit from polysemic information must first be validated if they are to serve as genuine evaluation sets in polysemy research. | contrasting |
train_15925 | very, totally) strengthens the adjectives it modifies. | a de-intensifying adverb (e.g. | contrasting |
train_15926 | When using the linear kernel or cosine similarity, the estimator of PHSIC in feature space (14) is as follows: Generally in kernel methods, a feature map φ(•) induced by a kernel k(•, •) is unknown or high-dimensional and it is difficult to compute estimated values in feature space. | when we use the linear kernel or cosine similarity, feature maps can be explicitly determined (Equation 19). | contrasting |
train_15927 | First, information-theoretic MI (Cover and Thomas, 2006) and its variants (Suzuki et al., 2009;Reshef et al., 2011) are the most commonly used dependence measures. | to the best of our knowledge, there is no practical method of computing MIs for large multi-class, high-dimensional (having a complex generative model) discrete data, such as sparse linguistic data. | contrasting |
train_15928 | To address the summarization alignment, former studies try to apply an attention mechanism to measure the saliency/novelty of each candidate word/sentence (Tan et al., 2017), with the aim of locating the most representative content to retain primary coverage. | toward summarizing a related work section, authors should be more creative when organizing text streams from the reference collection, where the selected content ought to highlight the topic bias of current work, rather than retell each reference in a compressed but balanced fashion. | contrasting |
train_15929 | It needs to remove the unnecessary information and select salient information from the input document to produce a condensed summary. | it is difficult for the basic encoder-decoder framework to learn the process of salient information selection, which has also been noticed by several previous work (Tan et al., 2017a,b). | contrasting |
train_15930 | Abstractive text summarization aims to shorten long text documents into a human readable form that contains the most important facts from the original document. | the level of actual abstraction as measured by novel phrases that do not appear in the source document remains low in existing approaches. | contrasting |
train_15931 | (2016); Wang and Jiang (2017) explore a compare and aggregate framework to directly capture the word-by-word matching between two paired sentences. | these approaches suffer from the problem of high matching complexity, since a similarity matrix between pairwise words needs to be computed, and thus it is computationally inefficient or even prohibitive when applied to long sentences (Mou et al., 2016). | contrasting |
train_15932 | The adaptive context-aware filter generation mechanism proposed here bears close resemblance to attention mechanism (Yin et al., 2016;Bahdanau et al., 2015;Xiong et al., 2017) widely adopted in the NLP community, in the sense that both methods intend to incorporate rich contextual information into text representations. | attention is typically operated on top of the hidden units preprocessed by CNN or LSTM layers, and assigns different weights to each unit according to a context vector. | contrasting |
train_15933 | We attribute the improvement to two potential advantages of our AdaQA model: (i) for the two previous baseline methods, the interaction between question and answer takes place either before or after convolution. | in our AdaQA model, the communication between two sentences is inherent in the convolution operation, and thus can provide the abstracted features with more flexibility; (ii) the bidirectional filter generation mechanism in our AdaQA model generates co-dependent representations for the question and candidate answer, which could enable the model to recover from initial local maxima corresponding to incorrect predictions (Xiong et al., 2017). | contrasting |
train_15934 | The typical translation process is performed with either off-the-shelf machine translation (MT) systems or multilingual dictionaries (Nie, 2010). | MT based approaches are far from being a suitable solution for solving CLIR problems (refer to detailed analysis in (Zhou et al., 2012)). | contrasting |
train_15935 | A recent work (Gupta et al., 2017) tries to learn taskspecific representation for CLIR. | their model only captures ranking signals in monolingual settings, which does not necessarily generalize well in CLIR. | contrasting |
train_15936 | A recent work (Gupta et al., 2017) tries to learn task-specific embeddings for CLIR. | it learns ranking signals by preserving pairwise ranking in monolingual settings prior to a transfer learning process to another language, which does not necessarily generalize well in CLIR. | contrasting |
train_15937 | OT is a general mathematical toolbox used to evaluate correspondence-based distances and establish mappings between probability distributions, including discrete distributions such as point-sets. | the nature of mono-lingual word embeddings renders the classic formulation of OT inapplicable to our setting. | contrasting |
train_15938 | Furthermore, both Russian and Spanish are pro-drop languages (Haspelmath, 2001) and share syntactic phenomena, such as dative subjects (Moore and Perlmutter, 2000;Melis et al., 2013) and differential object marking (Bossong, 1991), which might explain why ES is closest to RU overall. | English appears remarkably isolated from all languages, equally distant from its Germanic (DE) and Romance (FR) cousins. | contrasting |
train_15939 | (2017b) also leverage optimal transport distances for the cross-lingual embedding task. | to address the issue of nonalignment of embedding spaces, their approach follows the joint optimization of the transportation and procrustes problem as outlined in Section 2.2. | contrasting |
train_15940 | In principle, one simply looks beyond single sentences for co-occurring entity pairs. | this can introduce many false positives and prior work used a small sliding window and filtering (minimal-span) to mitigate training noise. | contrasting |
train_15941 | The implicit assumption behind this method is that machine-generated attention should mimic human rationales. | rationales on their own are not adequate substitutes for machine attention. | contrasting |
train_15942 | Existing methods commonly adapt the classifier by aligning the representations between the source and target domains (Glorot et al., 2011;Chen et al., 2012;Zhou et al., 2016;Ganin et al., 2016;Zhang et al., 2017). | our model adapts the mapping from rationales to attention; thus after training, it can be applied to different target tasks. | contrasting |
train_15943 | Deep learning models work best when trained on large amounts of labeled data. | acquiring labels is costly, motivating the need for effective semi-supervised learning techniques that leverage unlabeled examples. | contrasting |
train_15944 | This is similar to CVT in that it exposes the model to a restricted view of the input. | it is less data efficient. | contrasting |
train_15945 | The availability of large scale annotated corpora for coreference is essential to the development of the field. | creating resources at the required scale via expert annotation would be too expensive. | contrasting |
train_15946 | These include medium-scale multilingual datasets such as ONTONOTES (Pradhan et al., 2007;Weischedel et al., 2011), which led to the most recent evaluation campaigns, in particular CONLL (Pradhan et al., 2012), and are used in most current research (Björkelund and Kuhn, 2014;Martschat and Strube, 2015;Clark and Manning, 2016;Lee et al., 2017). | there are still many languages and domains for which no such resources are available, and even for English much larger corpora than ONTONOTES will eventually be required. | contrasting |
train_15947 | However, there are still many languages and domains for which no such resources are available, and even for English much larger corpora than ONTONOTES will eventually be required. | annotating data on the scale required to train state of the art systems using traditional expert annotation would be unaffordable. | contrasting |
train_15948 | A second coreference corpus created using crowdsourcing (in the context of a trivia game) also exists, the Quiz Bowl dataset (Guha et al., 2015). | such existing corpora are not widely used yet. | contrasting |
train_15949 | For this subtask, most previous work (Poesio et al., 2004;Lassalle and Denis, 2011;Hou et al., 2013b) calculate semantic relatedness between an anaphor and its antecedent based on word co-occurrence counts using certain syntactic patterns. | such patterns only consider head noun knowledge and hence are not sufficient for bridging relations which require the semantics of modification. | contrasting |
train_15950 | Hou (2018) created word embeddings for bridging (embeddings_PP) by exploring the syntactic structure of noun phrases (NPs) to derive contexts for nouns in the GloVe model. | embeddings_PP only contains the word representations for nouns. | contrasting |
train_15951 | For embeddings_PP, the result on using NP head + modifiers (31.67%) is worse than the result on using NP head (33.03%). | if we apply embeddings_PP to a bridging anaphor's head and modifiers, and only apply embeddings_PP to the head noun of an antecedent candidate, we get an accuracy of 34.53%. | contrasting |
train_15952 | Another new corpus for bridging is the second release of the ARRAU corpus, which contains 5,512 bridging pairs in three different domains (Poesio et al., 2018). | most bridging links in ARRAU are purely lexical bridging pairs, and only a small subset of the annotated pairs contains truly anaphoric bridging anaphors. | contrasting |
train_15953 | In parallel, more general work on common sense reasoning aims to develop a repository of common knowledge using semi-automatic methods (e.g., Cyc (Lenat, 1995) and ConceptNet (Liu and Singh, 2004)). | such knowledge bases are necessarily incomplete. | contrasting |
train_15954 | The man couldn't lift his son because he was so weak. | answer: the man (agent) Evidence and labels: "I was so weak that I couldn't lift" → E_A (query terms in bold) "She was so weak she couldn't lift" → E_A "I could not stand without falling immediately and I was so weak that I couldn't lift" → E_A "It hurts to lift my leg and its kind of weak" → E_P Stats and resolution: Agent evidence strength: 97 Patient evidence strength: 72 Number of scraped sentences: 109 Resolution: agent Table 5: Example Resolution for a WSC problem. | contrasting |
train_15955 | As a consequence, two vectors of the resulting space will be close if their corresponding nodes occur in topological proximity within the graph. | while such a topology allows perhaps for meaningful comparisons between points in this space, it is not directly compatible with the task of mapping text to entities. | contrasting |
train_15956 | The textual descriptions are treated as short documents, and each word in them is assigned a specific TF-IDF value, forming the set of textual features for the specific entity. | the KB graph is extended in the following way: Let t_c be the set of textual features for an entity c; then, for each t in t_c, we add an edge (c, t) with weight tf-idf_c(t), where tf-idf_c(t) is the TF-IDF value of t with respect to c. to Perozzi et al. | contrasting |
train_15957 | The purpose of the classification task (§4.3) is to provide a direct comparison of the textually enhanced vectors against vectors produced by the original graph, but independently of the compositional part. | the text mapping experiments (§§4.1, 4.2) evaluate the overall architecture of Fig. | contrasting |
train_15958 | More recently, several models (Shi and Weninger, 2017;Xie et al., 2016) have been proposed to handle unseen entities by leveraging text descriptions. | to these approaches, our model deals with long-tail or newly added relations and focuses on one-shot relational learning without any external information, such as text descriptions of entities or relations. | contrasting |
train_15959 | When evaluating existing embedding models, during training, we use not only the triples of background relations but also all the triples of the training relations and the one-shot training triple of those validation/test relations. | since the proposed metric model does not require the embeddings of query relations, we only include the triples of the background relations for embedding training. | contrasting |
train_15960 | These obstacles have motivated the work on automatic inference of REs (Banko et al., 2007;Li et al., 2008;Bartoli et al., 2018) where the objective is to develop approaches that are fast and deployable in real time. | the existing approaches tend to require large number of examples to cover both the alphabet and the possible syntactic patterns. | contrasting |
train_15961 | Recently, a KG embedding method which utilizes temporal scopes was proposed in (Jiang et al., 2016). | instead of directly incorporating time in the learned embeddings, the method proposed in (Jiang et al., 2016) first learns temporal order among relations (e.g., wasBornIn → wonPrize → diedIn). | contrasting |
train_15962 | We hypothesize that this phenomenon emerges due to TDNS (Equation 2). | in case of link prediction, we notice that the extra samples are affecting performance as they originate from the KG itself. | contrasting |
train_15963 | Recent research efforts have shown that neural architectures can be effective in conventional information extraction tasks such as named entity recognition, yielding state-of-the-art results on standard newswire datasets. | despite significant resources required for training such models, the performance of a model trained on one domain typically degrades dramatically when applied to a different domain, yet extracting entities from new emerging domains such as social media can be of significant interest. | contrasting |
train_15964 | (2012) show that domain-specific word embeddings tend to perform better when used in supervised learning tasks. | maintaining such an improvement in the transfer learning process is very challenging. | contrasting |
train_15965 | We designed this experimental setup based on the following considerations: • Challenging: Newswire is a well-studied domain for NER and existing neural models perform very well (around 90.0 F1-score (Ma and Hovy, 2016)). | the performance drops dramatically in social media data (around 60.0 F-score (Strauss et al., 2016)). | contrasting |
train_15966 | Thus, the transferred information is diluted while we train the target model with more data. | our transfer method explicitly saves the transferred information in the base part of our target model, and we use separate learning rates to help the target model to preserve the transferred knowledge. | contrasting |
train_15967 | (Shen et al., 2017) proposed to link entity mentions to an HIN such as DBLP and IMDB. | their articles are collected from the Internet through searching and thus are not related to the target entities. | contrasting |
train_15968 | One of the first such attempts was made by (Wang, 2009) on 311 clinical notes from an Intensive Care Unit (ICU) department of the single hospital - Royal Prince Alfred Hospital (RPAH). | no specific guidelines on how that data were annotated are available. | contrasting |
train_15969 | Typically, NER models are built upon conditional random fields (CRF) with the IOB or IOBES tagging scheme (Ma and Hovy, 2016;Lample et al., 2016;Ratinov and Roth, 2009;Finkel et al., 2005). | such design cannot deal with multi-label tokens. | contrasting |
train_15970 | More interestingly, compared to multi-word entity mentions, matched unigram entity mentions are more likely to be false-positive labels. | such false-positive labels will not introduce incorrect labels with the Tie or Break scheme, since either the unigram is a true entity mention or a false positive, it always brings two Break labels around. | contrasting |
train_15971 | (2016a) use a cross-lingual WIKIFIER to facilitate cross-lingual NER. | they do not explicitly address the case where the target entity does not exist in Wikipedia. | contrasting |
train_15972 | Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. | many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader's background knowledge. | contrasting |
train_15973 | We generally assume that the question is underspecified, in the sense that the question often does not provide enough information to be answered directly. | an agent can use the supporting rule text to infer what needs to be asked in order to determine the final answer. | contrasting |
train_15974 | Complex Question Answering, in which the question corresponds to multiple triples in knowledge base, has attracted researchers' attentions recently (Bao et al., 2014;Xu et al., 2016;Berant and Liang, 2014;Bast and Haussmann, 2015;Yih et al., 2015). | most of existing solutions employ the predefined patterns or templates in understanding complex questions. | contrasting |
train_15975 | For example, the correct SQG of question "In which countries do people speak Japanese" has only two nodes "countries", "Japanese". | we recognized the phrase "people" as a variable node by mistake and connected it to both "countries" and "Japanese" to generate a wrong SQG Q_S. | contrasting |
train_15976 | Character sequence would give information which helps to relieve the OOV problem, as many English words share the same stem and differ only in prefix or suffix. | this is not the case in Chinese, and we observe no significant improvement incorporating character-level embedding into our system. | contrasting |
train_15977 | Note that this formulation is in similar spirit to highway networks (Srivastava et al., 2015). | since our gating function is learned via reasoning over multi-granular sequence blocks, it captures more compositionality and long range context. | contrasting |
train_15978 | Unless stated otherwise, the encoder in the pointer layer for span prediction models also uses DCU. | for the Hybrid DCU-LSTM models, answer pointer layers use BiLSTMs. | contrasting |
train_15979 | When compared with a BiL-STM of equal output dimensions (150d), we find that our DCU model performs competitively, with less than 1% deprovement across all metrics. | the time cost required is significantly reduced. | contrasting |
train_15980 | Here, we note that Sim-DCU does not produce reasonable results at all, which seems to be in similar vein to results on SearchQA, i.e., a recursive cell that processes word-by-word is mandatory for span prediction. | our results show that it is not necessary to construct gates in a word-by-word fashion. | contrasting |
train_15981 | Our work is most similar to N2NMN (Hu et al., 2017) model, which learns both semantic operators and the layout in which to compose them. | optimizing the layouts requires reinforcement learning, which is challenging due to the high variance of policy gradients, whereas our chart-based approach is end-to-end differentiable. | contrasting |
train_15982 | As a result, a large percentage of CoQA answers are named entities or short noun phrases, much like those in SQuAD. | the asymmetric nature of QuAC forces students to ask more exploratory questions whose answers can potentially be followed up on. | contrasting |
train_15983 | Dependency paths focus on hidden features at syntactic and functional perspective, which is a good complementary to sentential encoding results. | performances drop by 2.17 if only dependency information is used, we find that under certain dependency structures, crucial words (bolded) are not in the path between the answer and the focus mention (underlined), for example, "who did draco malloy end up marrying" and "who did the philippines gain independence from". | contrasting |
train_15984 | As for the second challenge, a robust model could extract precise relation features even from low-quality sentences containing noisy words. | previous neural methods are always lacking in robustness because parameters are initialized randomly and hard to tune with noisy training data, resulting in the poor performance of extractors. | contrasting |
train_15985 | Dependency trees help relation extraction models capture long-range relations between words. | existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. | contrasting |
train_15986 | Traditional feature-based models are able to represent dependency information by featurizing dependency trees as overlapping paths along the trees (Kambhatla, 2004). | these models face the challenge of sparse feature spaces and are brittle to lexical variations. | contrasting |
train_15987 | Another popular approach, inspired by Bunescu and Mooney (2005), is to reduce the parse tree to the shortest dependency path between the entities (Xu et al., 2015a,b). | these models suffer from several drawbacks. | contrasting |
train_15988 | It is therefore desirable to combine our GCN models with tree pruning strategies to further improve performance. | pruning too aggressively (e.g., keeping only the dependency path) could lead to loss of crucial information and conversely hurt robustness. | contrasting |
train_15989 | Traditionally, evaluation on SemEval is conducted without entity mentions masked. | as we will discuss in Section 6.4, this method encourages models to overfit to these mentions and fails to test their actual ability to generalize. | contrasting |
train_15990 | While the unmasked model achieves an 83.6 F1 on the original SemEval dev set, F1 drops drastically to 62.4 if we replace dev set entity mentions with a special <UNK> token to simulate the presence of unseen entities. | the masked model is unaffected by unseen entity mentions and achieves a stable dev F1 of 74.7. | contrasting |
train_15991 | Attention mechanisms are often used in deep neural networks for distantly supervised relation extraction (DS-RE) to distinguish valid from noisy instances. | traditional 1-D vector attention models are insufficient for the learning of different contexts in the selection of valid instances to predict the relationship for an entity pair. | contrasting |
train_15992 | The latter introduces a multilingual framework which employs a monolingual attention mechanism to utilize the information within monolingual texts, and further uses a cross-lingual attention mechanism to consider the information consistency and complementarity among cross-lingual texts. | extra resources are difficult to obtain in many practical scenarios. | contrasting |
train_15993 | set r_L2 to 9, but there are only one and two instances for selection in tasks One and Two, so the 2-D matrix cannot demonstrate its full potential. | in All, many entity pairs contain multiple or more than 9 instances, so it can learn a better 2-D matrix to focus on different instances. | contrasting |
train_15994 | In this example, the comma implies a semantic relationship location/location/contains for the entity pair (Maryland, Baltimore). | biGRU+2ATT allocates quite a small probability to it; and (2) we can see that our model focuses on different words via different attention vectors (9 in total). | contrasting |
train_15995 | The bi-directional DAG LSTM model showed superior performance over several strong baselines, such as tree-structured LSTM (Miwa and Bansal, 2016), on a biomedical-domain benchmark. | the bidirectional DAG LSTM model suffers from several limitations. | contrasting |
train_15996 | On the other hand, EMBED assigns each edge label to an embedding vector, but complicates the gated operations by changing the U's to be 3D tensors. | we take edge labels as part of the input to the gated network. | contrasting |
train_15997 | In particular, our model adopts the same methods for calculating input representation (as in Section 3.1) and performing classification as the baseline model. | different from the baseline bidirectional DAG LSTM model, we leverage a graph-structured LSTM to directly model the input graph, without splitting it into two DAGs. | contrasting |
train_15998 | By revisiting Table 2, we can see that the average number of tokens for the ternary-relation data is 74, which means that the baseline model has to execute 74 recurrent transition steps for calculating a hidden state for each input word. | our model only performs 5 state transitions, and calculations between each pair of nodes for one transition are parallelizable. | contrasting |
train_15999 | The first case generally mentions that Gefitinib does not have an effect on T790M mutation on EGFR gene. | note that both "" and "was not" serve as indicators; thus incorporating them into the contextual vectors of these entity mentions | contrasting |
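All of the rows previewed above carry the "contrasting" label, while the header indicates the label column has four classes in total. As a rough illustration of how a pair-classification split with this schema might be consumed, here is a minimal sketch; the `train.csv` file name and CSV storage format are assumptions for illustration, not something the preview specifies.

```python
# Minimal sketch for loading sentence-pair rows with the four columns
# shown above (id, sentence1, sentence2, label). The file name and
# CSV format are assumptions; adapt to however the split is stored.
import csv
from collections import Counter

def load_pairs(path):
    """Yield (id, sentence1, sentence2, label) tuples from a CSV file."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield row["id"], row["sentence1"], row["sentence2"], row["label"]

if __name__ == "__main__":
    pairs = list(load_pairs("train.csv"))  # hypothetical file name
    print(len(pairs), "pairs loaded")
    # The preview shows only "contrasting", but the header says the
    # label column has 4 classes; count whatever is actually present.
    print(Counter(label for _, _, _, label in pairs))
```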