id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_15600 | For example, in biomedical domain, the number of unseen biomedical entity mentions (such as disease names, chemical names), their abbreviations or acronyms, as well as multiple names of the same entity is growing fast with the rapid increase of biomedical literatures and clinical records. | the performance of a learning based NER system relies heavily on data annotation, which is quite expensive. | contrasting |
train_15601 | Therefore, in many special domains, only trained models or APIs are available, while their training data are private and inaccessible. | due to insufficient labeled training data, deep models usually fail to behave normally in such domain, and state-of-the-art methods in these domains are usually dominated by rule based deductive methods or shallow model with hand-crafted features. | contrasting |
train_15602 | For this dataset, state value function of RLIE-A3C converged at 48 minutes. | from Figure 4, we observe that accuracies converge to the final values much before that. | contrasting |
train_15603 | Motivated by the observation that named entities are highly related to linguistic constituents, we propose a constituent-based BRNN-CNN for named entity recognition. | to classical sequential labeling methods, the system first identifies which text chunks are possible named entities by whether they are linguistic constituents. | contrasting |
train_15604 | Finally, the system makes predictions for a sentence by collecting the constituents whose most probable classes are named entity classes. | nodes whose ancestors are already predicted as named entities are ignored to prevent predicting overlapping named entities. | contrasting |
train_15605 | Given these features, O(D) prediction is simple and parallelizable across the length of the sequence. | feature extraction may not necessarily be parallelizable. | contrasting |
train_15606 | It may also have better sample complexity, as it imposes more prior knowledge about the structure of the interactions among the tags (London et al., 2016). | it has worse computational complexity than independent prediction. | contrasting |
train_15607 | These outperform similar systems that use the same features, but independent local predictions. | the greedy sequential prediction (Daumé III et al., 2009) approach of Ratinov and Roth (2009), which employs lexicalized features, gazetteers, and word clusters, outperforms CRFs with similar features. | contrasting |
train_15608 | In order to disambiguate "India" to the correct entity, a linking system would need to utilize both the local context (played and match), and the document context (to identify the sport). | the model needs to represent context such that the semantics are preserved, e.g. | contrasting |
train_15609 | (2005) released a corpus covering four different kinds of OCRed text comprising German and Bulgarian. | in 2017 the corpus was untraceable and no recent re-search relating to the data could be found. | contrasting |
train_15610 | the 17th century or named entities cannot be found in a dictionary and can therefore not be covered with any of the approaches mentioned above. | especially named entities are crucial for the automatic or semi-automatic analysis of narratives e.g. | contrasting |
train_15611 | We henceforth call this unknown set test unk (text 6). | the second set contains parts of the same texts as the training, thus specific vocabulary might have been introduced already. | contrasting |
train_15612 | Given a sequence [x 1 , x 2 , ..., x T ] where x t is the input embedding of element t, the state of Bi-GRU at position t is: where h f t and h bt are the states of the forward and backward GRU at position t. The final sequence embedding is either the concatenation of h f T and h b1 or simply the average of h t . | attentive Sequence Encoder directly using [h f T , h b1 ] for sequence encoding often fails to capture all the information when the sequence is long, while using the average of h t also has the drawback of treating useless elements equally with informative ones. | contrasting |
train_15613 | The improvements made by using relevant law articles actually indicates the nature of the civil law system that judgements are made based on statutory laws. | if we only use the extracted relevant articles to make prediction (SVM art and NN art 4 ), the performance becomes worse. | contrasting |
train_15614 | In LSA, entropy is not as directly applicable: vectors in W = U Σ can be arbitrarily realnumbered. | we still want to access a similar basic concept, the amount a vector representation of a document is concentrated in a few dimensions. | contrasting |
train_15615 | However, the amount of decay depends on the proportion of the corpus replicated: the smaller the proportion size, the more dramatic the decay. | the loss for singular documents increases only slightly with more copies, though more for higher proportions of duplication. | contrasting |
train_15616 | Figure 6 demonstrates that, as before, perplexity is significantly higher as template repetition increases when there is a small number of topics K = 5. | as the number of topics in- Figure 6: LDA training perplexity for REUSL 2.5k with different types of templated text repetition. | contrasting |
train_15617 | For LSA, templated repetition has no apparent effect on the loss of untemplated texts. | the effect for templated texts is less straightforward. | contrasting |
train_15618 | Repeating one document will likely only affect one or a few components regardless of how many repetitions occur. | if there are many different repeated documents, more components will be used to model them, which worsens the model fit more as the number of unique repeated texts increases. | contrasting |
train_15619 | In a topic model, it may be easy to identify the templated text based upon it appearing in one topic. | if there is a concern that there is systematic use of text templates in documents (such as page headers or publication information) that may be too close to the language model, the ngram removal approach inspired by Citron and Ginsparg (2015) is an expensive but straightforward way to ensure these strings are detected and deleted. | contrasting |
train_15620 | Existing models may lack signals for the following outlier detection steps and hence cannot be directly plugged in. | it is possible to adapt certain models to the outlier detection task. | contrasting |
train_15621 | The semantic regions learned from the Embedded vMF Allocation model provide a set of candidates frequently mentioned by documents in the corpus. | not all of them are semantic focuses of the corpus -some are too general to distinguish outlier and normal document. | contrasting |
train_15622 | To estimate this, we first need to calculate the probability of each word being drawn from the semantic focuses. | it is then possible to estimate the expected percentage of words not drawn from semantic focuses in each document as the outlierness: due to the noisiness in text data, this assumption oversimplifies the characterization of outlier documents. | contrasting |
train_15623 | Figure 1 shows an example of causal features that temporally causes Facebook's stock rise in August. | it is difficult to understand how the statistically verified factors actually cause the changes, and whether there is a latent causal structure relating the two. | contrasting |
train_15624 | A simple choice for traversing a graph are the traditional graph searching algorithms such as Breadth-First Search (BFS). | the graph searching procedure is likely to be incomplete (low recall), because simple string match is insufficient to match an effect to all its related entities, as it misses out in the case where an entity is semantically related but has a lexically different name. | contrasting |
train_15625 | By this way, the relevant videos to the threads can be recommended to learners, and the chaotic threads in discussion forums can also be well grouped. | it is challenging to identify the relevant video clips for threads without labeling data. | contrasting |
train_15626 | To our knowledge, (Jiang et al., 2017) also proposes an unsupervised learning model (called NOSE) for the task of thread-subtitle matching within MOOC settings. | nOSE needs to build a heterogeneous textual network beforehand and may suffer from heterogeneous issue, which our model can avoid. | contrasting |
train_15627 | Choices for privacy controls, which are the most actionable pieces of information in these documents, are frequently "hidden in plain sight" among other information. | the nature of the text and the vocabulary used to present choices provide us with an opportunity to automatically identify choices, a goal that we focus upon in this paper. | contrasting |
train_15628 | The unbalanced distribution of the opt-out labels allowed us to manually verify and correct labels in the positive class. | correcting errors in the much larger negative class (of 12K instances) was a challenge, since comprehensive manual verification was infeasible. | contrasting |
train_15629 | The BCT model serves as a state-of-the-art transition-based baseline. | it requires that the training data contains both syntax trees and disfluency annotations, which reduces the practicality of the algorithm. | contrasting |
train_15630 | 4 We can see that AMU16 SMT is the current state of the art on CoNLL, with an F 0.5 of 49.49. | cAMB16 SMT generalises better on FcE and JFLEG: 52.90 and 52.44 F 0.5 respectively. | contrasting |
train_15631 | A selectively chosen example is the replacement from "discontinous" to "discontinuous", which never occurs in training. | similar errors of low edit distance also occur once in the dev set and never in training, but the CHAR+BI+DOM model filtered against the NUCLE corpus, hurt effectiveness for the phrase-based models. | contrasting |
train_15632 | Similarly, we solved 75% of the Inconsistency errors including lexical, tense and definiteness (definite or indefinite articles) cases. | we also observe that our system brings relative 21% new errors. | contrasting |
train_15633 | In this case, transfer learning methods still show better accuracies than target-only approaches on average. | the performance gain is weakened compared to using 1,280 labeled training sentences and there are some mixed results. | contrasting |
train_15634 | When training with only 32 tag-labeled sentences, which is an extremely low-resourced setting, transfer learning methods still showed better accuracies than target-only methods on average. | not using the common BLSTM in transfer learning models showed better performance than using it on average. | contrasting |
train_15635 | Recently Lin and Parikh (2016) have leveraged visual question answering models to encode images and descriptions into the same space. | all of this work is targeted at monolingual descriptions, i.e., mapping images and descriptions in a single language onto a joint embedding space. | contrasting |
train_15636 | Linguistic parsers can also be too slow for real-time applications. | an RNN can detect entities in the question with high accuracy and low latency. | contrasting |
train_15637 | (2015) similarly assume that the answer to a question is at most two hops away from the target entity. | they do not propose how to obtain the target entity, since it is provided as part of their dataset. | contrasting |
train_15638 | These inner products are negative, indicating that the context vectors point in the opposite direction from the word vectors. | the GloVe context vectors have essentially the same relationship to the mean of the word vectors as the word vectors themselves. | contrasting |
train_15639 | From a memory perspective, one multilingual BTS model will take less space than separate FF models. | from a runtime perspective, a pipeline of our models doing language identification, word segmentation, and then POS tagging would still be faster than a single instance of the deep LSTM BTS model, by about 12x in our FLOPs estimate. | contrasting |
train_15640 | (2016) considered relation between opinion words and aspect words in a supervised model named RNCRF. | rNCrF tends to suffer from parsing errors since the structure of the recursive network hinges on the dependency parse tree. | contrasting |
train_15641 | CMLA (Wang et al., 2017a) used a multilayer neural model where each layer consists of aspect attention and opinion attention. | cMLA merely employs standard GRU without extended memories. | contrasting |
train_15642 | For example, domain adaptation is studied for sentiment classification (Glorot et al., 2011) and parsing (McClosky et al., 2010), just to name a few. | there is very little work on domain adaptation for word embedding learning. | contrasting |
train_15643 | Embeddings learned from such an approach were shown to be able to improve the performance on a cross-domain sentiment classification task. | this model fails to learn embeddings for many words which are neither pivots nor non-pivots, which could be crucial for some downstream tasks such as named entity recognition. | contrasting |
train_15644 | Interestingly, the concatenation approach appears to be competitive in this task, especially for the Spanish dataset, which appears to be better than the DARep approach. | we note such an approach does not capture any information transfer across different domains in the learning process. | contrasting |
train_15645 | However, we note such an approach does not capture any information transfer across different domains in the learning process. | our approach learns embeddings for the target domain by capturing useful cross-domain information and therefore can lead to improved modeling of embeddings that are shown more helpful for this specific down-stream task. | contrasting |
train_15646 | This makes both the information integration and stopping criteria welldefined. | in our focused reading domain, we do not know ahead of time which new pieces of information are necessarily relevant and must be taken in context. | contrasting |
train_15647 | Ozbal and Strapparava (2012) generate new words to describe a product given its category and properties. | their method is limited to handcrafted rules as compared to our data driven approach. | contrasting |
train_15648 | (2017) have proposed an approach to recommend brand names based on brand/product description. | they consider only a limited number of features like memorability and readability. | contrasting |
train_15649 | Unfortunately, practitioners who are qualified to diagnose and treat serious mental health issues such as schizophrenia are in chronically short supply, and their accumulated knowledge cannot be easily formalized into reproducible metrics (Patel et al., 2007). | clinical research into the symptoms and mechanisms of schizophrenia suggests that disturbances in language use, and especially in metaphor use and affect, characterize schizophrenia. | contrasting |
train_15650 | Hoax stories tend to use fewer superlatives and comparatives. | compared to other types of fake news, propaganda uses relatively more assertive verbs and superlatives. | contrasting |
train_15651 | By examining the results, we find an overall crossparty agreement of 46% regarding the discussed issues. | this agreement varies substantially if we consider the different macro-domains. | contrasting |
train_15652 | The possibility of measuring agreement at a finer level (topics) that is offered by our approach, shows, for example, that between 2004 and 2012 two opposite positions have been defined regarding the Middle East. | there has been a general agreement on the role of the U.S. concerning the relations with Europe. | contrasting |
train_15653 | These approaches extract concept and relation labels from syntactic structures and connect them to build a concept map. | common task definitions and comparable evaluations are missing. | contrasting |
train_15654 | Other types of information representation that also model concepts and their relationships are knowledge bases, such as Freebase (Bollacker et al., 2009), and ontologies. | they both differ in important aspects: Whereas concept maps follow an open label paradigm and are meant to be interpretable by humans, knowledge bases and ontologies are usually more strictly typed and made to be machine-readable. | contrasting |
train_15655 | Their approach can be interpreted as Q-BOT 'forgetting' the task after interacting with A-BOT. | this behavior of Q-BOT to remember the task only during dialog but not while predicting is somewhat unnatural compared to our setting. | contrasting |
train_15656 | Studies have shown that self expression and social support are beneficial in improving the individual's state of the mind (Turner et al., 1983;Choudhury and Kiciman, 2017) and thus such communities and interventions are important in suicide prevention. | there are often thousands of user posts published in such support forums daily, making it difficult to manually identify individuals at risk of self-harm. | contrasting |
train_15657 | Aside from the effort required to design effective features, these approaches usually model the problem with respect to the selected features and ignore other indicators and signals that can improve prediction. | our model only relies on text and is not dependent on any external or domain-specific features. | contrasting |
train_15658 | Previous selfreported diagnosis detection datasets contained a limited number of both control users and diagnosed users. | to this, we construct a new dataset with over 9,000 depressed users matched with a realistic number of control users. | contrasting |
train_15659 | Modern statistical learning approaches capture correlations among output variables in order to make coherent predictions. | for realworld applications, some implicit correlations are not appropriate, especially if they are amplified. | contrasting |
train_15660 | In practice, it is hard to obtain a solution where all corpus-level constrains are satisfied. | we show that the performance of the proposed approach is empirically strong. | contrasting |
train_15661 | The original dataset includes about 125,000 images with 75,702 for training, 25,200 for developing, and 25,200 for test. | the dataset covers many non-human oriented activities (e.g., rearing, retrieving, and wagging), so we filter out these verbs, resulting in 212 verbs, leaving roughly 60,000 of the original 125,000 images in the dataset. | contrasting |
train_15662 | It might be necessary to sacrifice some accuracy in order to satisfy privacy requirements. | this is not the case of all private information, since some of it is not relevant for the prediction of the text label. | contrasting |
train_15663 | In this paper we explore the following situation: (i) a main classifier uses a deep network to predict a label from textual data; (ii) an attacker eavesdrops on the hidden layers of the network and tries to recover information about the input text of unseen examples. | to previous work about neural networks and privacy (Papernot et al., 2016;Carlini et al., 2018) we do not protect the privacy of examples from the training set, but the privacy of unseen examples provided, e.g., by a user. | contrasting |
train_15664 | If its accuracy is high, then an eavesdropper can easily recover information about the input document. | if its accuracy is low (i.e. | contrasting |
train_15665 | In more details, for age, the adversary is well over the baseline in all cases except US. | gender seems harder to predict: the adversary outperforms the most frequent class baseline only in the +DEMO setting. | contrasting |
train_15666 | However, they only use a single adversary to alter the training of the main model and to evaluate the privacy of the representations, with the risk of overestimating privacy. | once the parameters of our main model are fixed, we train a new classifier from scratch to evaluate privacy. | contrasting |
train_15667 | These trends are persistent for all main-task/protected-attribute pairs we tried. | training the attacker network on the resulting encoder vectors reveals a different story. | contrasting |
train_15668 | 6 We are interested in tweets whose protected attribute (race) is correctly predicted by the adversary. | at accuracy rates below 60%, many of the correct predictions could be attributed to chance. | contrasting |
train_15669 | Recent work on creating private representation in the text domain (Li et al., 2018) share our motivation of removing unintended demographic attributes from the learned representation using adversarial training. | they report only the discrimination accuracies of the adversarial component, and do not train another classifier to verify that the representations are indeed clear of the protected attribute. | contrasting |
train_15670 | Recent approaches counter this deficit by considering external sources related to a claim. | these methods require substantial feature modeling and rich lexicons. | contrasting |
train_15671 | 2016 learned a part-of-speech (POS) tagger for their data and constructed a word level translation phrasebook to map emojis and slang to the Dictionary of Affect in Language (DAL) in order to identify their emotion. | to Blevins' translation approach, we leverage our large unlabeled dataset to automatically induce resources, such as word embeddings, that function well within the domain of our task. | contrasting |
train_15672 | 2018b's research and reflects the fact that emotional states may fluctuate often and within a certain number of days. | word embeddings improved consistently as we extended the context window from 2 days to 90 days. | contrasting |
train_15673 | We expect our methods to be generalizable because we compute embeddings and lexicons from neighborhood-specific data and do not rely on large, hand-crafted resources such as dictionaries. | we hope to test generalizability in future work by applying our methods to other gang-related corpora, because there is variation in language, local concepts, and behavior across gangs. | contrasting |
train_15674 | 2018), and NPN -has focused on learning to predict individual entity states at various points in the text, thereby approximating the underlying dynamics of the world. | while these models can learn to make local predictions with fair accuracy, their results are often globally unlikely or inconsistent. | contrasting |
train_15675 | The commonsense constraints we have used for ProPara are general, covering the large variety of topics contain in ProPara (e.g., electricity, photosynthesis, earthquakes). | if one wants to apply ProStruct to other genres of procedural text (e.g., fictional text, newswire articles), or broaden the state change vocabulary, different commonsense constraints may be needed. | contrasting |
train_15676 | (2017) argue that Multi-NLI "[makes] it possible to evaluate systems on nearly the full complexity of the language." | how well does Multi-NLI test a model's capability to understand the diverse semantic phenomena captured in DNC? | contrasting |
train_15677 | The results are flipped on the two datasets focused on downstream tasks (Sentiment and RE) and MV. | the differences between pre-training on the DNC or Multi-NLI are small. | contrasting |
train_15678 | A first step toward grounded commonsense inference with today's deep learning machinery is to create a large-scale dataset. | recent work has shown that human-written datasets are susceptible to annotation artifacts: unintended stylistic patterns that give out clues for the gold labels (Gururangan et al., 2018;Poliak et al., 2018). | contrasting |
train_15679 | 13 JOCI increases the scale by generating the hypotheses using a knowledge graph or a neural model. | to JOCI where the task was formulated as a regression task on the degree of plausibility of the hypothesis, we frame commonsense inference as a multiple choice question to reduce the potential ambiguity in the labels and to allow for direct comparison between machines and humans. | contrasting |
train_15680 | Wang (2017) release a larger dataset for fake news detection, and propose a hybrid neural network to integrate the statement and the speaker's meta data to do classification. | the presentation of evidences is ignored. | contrasting |
train_15681 | embeddings) than text-based models. | most existing models still have a number of drawbacks. | contrasting |
train_15682 | There is also work using deep learning methods to project different modality inputs into a common space, including restricted Boltzman machines (Ngiam et al., 2011;Srivastava and Salakhutdinov, 2012), autoencoders (Silberer and Lapata, 2014;Silberer et al., 2016), and recursive neural networks (Socher et al., 2013). | the above methods can only generate multimodal vectors of those words that have perceptual information, thus reducing multimodal vocabulary drastically. | contrasting |
train_15683 | Some recent work has investigated static image-based dialogue. | several real-world human interactions also involve dynamic visual context (similar to videos) as well as dialogue exchanges among multiple speakers. | contrasting |
train_15684 | (2014) used visual display information for on-screen item resolution in utterances for improving personal digital assistants. | we propose to employ dynamic video-based information as visual context knowledge in dialogue models, so as to move towards video-grounded intelligent assistant applications. | contrasting |
train_15685 | dog, sit, red) to the decoder instead of the image features, and is more effective in image captioning according to the evaluation on benchmark datasets. | the models based on conceptual information have a major drawback that it is hard for the model to associate the details with the specific objects in the image, because the visual words are inherently unordered in semantics. | contrasting |
train_15686 | The advantages are that the attention no longer needs to find the relevant generic regions by itself but instead find relevant bounding boxes that are object orientated and can serve as semantic guides. | the drawback is that predicting bounding boxes is difficult, which requires large datasets (Krishna et al., 2017) and complex models (Ren et al., 2015, 2017a). | contrasting |
train_15687 | of approach can be seen as the extension of the earlier template-based slotting-filling approaches (Farhadi et al., 2010;Kulkarni et al., 2013). | few work studies how to combine the two kinds of attention models to take advantage of both of them. | contrasting |
train_15688 | α t ∈ R k is the attentive weights of V and the attentive visual input z t ∈ R g is calculated as The visual input z t and the embedding of the previous output word y t−1 are the input of the LSTM. | there is a noticeable drawback that the previous output word y t−1 , which is a much stronger indicator than the previous hidden state h t−1 , is not used in the attention. | contrasting |
train_15689 | ble 5, when extracting all words, providing more words to the model indeed increases the captioning performance. | even when top-20 all words are used, the performance is still far behind using only top-5 object words and seems to reach the performance ceiling. | contrasting |
train_15690 | Merging Gate Combing the visual attention and the topic attention directly indeed results in a huge boost in performance, which confirms our motivation. | directly combining them also causes lower scores in attributes, color, count, and size, showing that the advantages are not fully made use of. | contrasting |
train_15691 | With TEF, the performance of MCN can be significantly improved. | it is still inferior to our proposed TGN. | contrasting |
train_15692 | We test a state-of-the-art system (Peters et al., 2018) on PreCo and get an F1 score of 81.5. | a modest human performance (87.9, which will be described in 4.1 ) is much higher, verifying there remain challenges. | contrasting |
train_15693 | There are also specificities in each task. | existing methods for Chinese NER either do not exploit word boundary information from CWS or cannot filter the specific information of CWS. | contrasting |
train_15694 | In order to incorporate word boundary information from CWS task into NER task, Peng and Dredze (2016) propose a joint model that performs Chinese NER with CWS task. | their proposed model only focuses on task-shared information between Chinese NER and CWS, and ignores filtering the specificities of each task, which will bring noise for both of the tasks. | contrasting |
train_15695 | According to the results of Table 2 and Table 4, our proposed model achieves 4.67% and 1.43% improvement as compared with previous stateof-the-art methods on WeiboNER dataset and SighanNER dataset, respectively. | the overall performance on WeiboNER dataset is relatively low. | contrasting |
train_15696 | In this paper, we show that employing linguistic features in a neural coreference resolver significantly improves generalization. | the incorporated features should be informative enough to be taken into account in the presence of lexical features, which are very strong features in the CoNLL dataset. | contrasting |
train_15697 | Outputs of one model can also serve as features to the next model. | such an approach cannot model overlapping mentions of the same type, which frequently appear in practice. | contrasting |
train_15698 | In both of the previous approaches, their models would make local predictions and assign both "A" and "B" as left boundaries, and both "C" and "D" as right boundaries. | based on such local predictions one could also interpret "A B C" as a mention -this is where the ambiguity arises. | contrasting |
train_15699 | However, based on such local predictions one could also interpret "A B C" as a mention -this is where the ambiguity arises. | our model enjoys the structural ambiguity free property as it uses our newly defined I nodes (together with X nodes) to jointly capture the complete boundary information of mentions. | contrasting |
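Below is a minimal sketch of how a dataset with the schema shown in the header (id, sentence1, sentence2, and a 4-class label that includes "contrasting") could be loaded and filtered with the Hugging Face `datasets` library. The repo identifier is a placeholder, since the preview does not name the dataset.

```python
from datasets import load_dataset

# Hypothetical repo id -- the preview does not name the dataset,
# so substitute the actual Hugging Face identifier here.
ds = load_dataset("your-org/your-dataset", split="train")

# Columns match the header above: id, sentence1, sentence2, label.
print(ds.column_names)            # ['id', 'sentence1', 'sentence2', 'label']
print(ds[0]["id"], ds[0]["label"])

# Keep only the rows labelled 'contrasting', the class shown in this preview.
# If `label` is stored as a ClassLabel (integer) feature rather than a string,
# compare against ds.features["label"].str2int("contrasting") instead.
contrasting = ds.filter(lambda row: row["label"] == "contrasting")
print(f"{len(contrasting)} contrasting pairs out of {len(ds)} total")
```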