id (stringlengths 7–12) | sentence1 (stringlengths 6–1.27k) | sentence2 (stringlengths 6–926) | label (stringclasses, 4 values) |
---|---|---|---|
train_5100 | As in the mental health setting, these studies have largely considered the behavior of interlocutors in a single conversation, tying it to outcomes such as persuasion, problem-solving and incivility (Curhan and Pentland, 2007; Rosenthal and McKeown, 2015; Wang et al., 2017; Zhang et al., 2018), or to social correlates such as power, gender and age (Gonzales et al., 2010; Ireland et al., 2011; Danescu-Niculescu-Mizil et al., 2012; Nguyen et al., 2013; Prabhakaran et al., 2014). | our work seeks to chart the development of linguistic tendencies of individuals as they engage in many conversations. | contrasting |
train_5101 | This could suggest that as counselors encounter more diverse situations, they become linguistically better equipped to approach them in a specialized fashion. | in conversational components which have functions that generalize across texter problems there is no systematic increase in diversity (hello), and there is even a slight decrease (goodbye), perhaps indicating that counselors develop routines to close the conversation. | contrasting |
train_5102 | With a shared contextual encoder representation consisting of character and word embedding based models, the proposed architecture provides an effective solution for knowledge transfer across tasks, thus consolidating the ability to perform well on unseen text. | this proposed architecture is not scalable: the number of decoders scales linearly with the number of attributes. | contrasting |
train_5103 | Indeed, we show a very high performance of the baseline by length over UPKConvArg in §5.1. | a data set that includes only evidence poses a more challenging task. | contrasting |
train_5104 | In addition, the pre-processing step which generates the huge set of linguistic features, used by Habernal and Gurevych (2016a) and Simpson and Gurevych (2018), takes a lot of time and is not suitable for many languages. | our network does not depend on task specific features, and still achieves state of the art results on the task of argument convincingness classification and ranking. | contrasting |
train_5105 | There we find words related to argumentation (argue, claim), studies and polls (found), figures (Dr., Clinton, W. Bush) or court orders (supreme, v.). | when subtracting the true convincing distribution from the true nonconvincing distribution (Figure 2b) one gets opinion words (support, opposes, vote), partial change (reduce, amend, part), non-emphasized actions (said, proposed, concern). | contrasting |
train_5106 | If we argue against the DC (Con stance), we may choose any plausible alternative as our EC, following a line of argument such as "EC is a better alternative to DC". | when we argue in favor of DC (Pro stance), the typical argument changes to "if we don't choose DC then we are left with EC", which requires EC to be the "default" alternative to DC. | contrasting |
train_5107 | Among others, emotions are encoded in both lexical and acoustic information where each modality contributes to the overall emotional state of a given speaker. | in some situations, one modality can be more insightful to derive emotions than the other. | contrasting |
train_5108 | Previous studies usually extract features from the modalities in independent modules, and then they concatenate the corresponding utterance representations from the acoustics and lexical features to feed into the next layers of their models. | we argue that a more natural way to understand emotions is to align lexical and acoustic information, which simulates the way humans process both modalities simultaneously. | contrasting |
train_5109 | For negative sampling in the contrastive loss function, we empirically found that using anger with happiness and neutral with sadness generally worked well since the acoustic patterns are similar. | we saw some informative pairs when happiness and anger were coupled with neutral. | contrasting |
train_5110 | This framework was followed by most of the recent studies in this field (Gui et al., 2016a; Li et al., 2018; Xu et al., 2019; Yu et al., 2019). | there are two shortcomings in the current ECE task. | contrasting |
train_5111 | In addition to a document as the input, ECE requires the emotion annotation to be provided before cause extraction. | the output of our ECPE task is an emotion-cause pair, without the need to provide emotion annotations in advance. | contrasting |
train_5112 | Till now, the two component Bi-LSTMs at the upper layer are independent of each other. | as we have mentioned, the two sub-tasks (emotion extraction and cause extraction) are not mutually independent. | contrasting |
train_5113 | It is clear that by removing the emotion annotations, the F1 score of CANN drops dramatically (about 34.69%). | our method does not need the emotion annotations and achieves 65.07% in F1 measure, which significantly outperforms the CANN-E model by 27.1%. | contrasting |
train_5114 | On the one hand, its goal is not direct. | the mistakes made in the first step will affect the results of the second step. | contrasting |
train_5115 | This kind of argument can be made, mutatis mutandis, when debating quite different topics, such as legalizing organ trade, or banning pornography. | it is not always relevant when debating whether to legalize or ban something; For example, when debating whether to legalize polygamy or ban breastfeeding in public, the black market argument seems less appropriate. | contrasting |
train_5116 | A somewhat similar approach is suggested in the work of Walton and Gordon (2017, 2018), where arguments are constructed from a database (Reed and Rowe, 2004) of smaller argumentative building blocks. | these building blocks are topic-specific and can not readily provide arguments for topics not in the database. | contrasting |
train_5117 | We observe that LexRank tends to extract long and comprehensive sentences that yield high graph centrality; the abstractive pointer-generator networks, despite the promising results, can sometimes fail to generate meaningful summaries (e.g., "a third of all 3-year-olds [...] have been given to a child"). | our DPP method is able to select a balanced set of representative and diverse summary sentences. | contrasting |
train_5118 | It was also applied for optimizing the neural summarization model for headline generation with respect to ROUGE (Ayana et al., 2017), which is based on an overlap of words with reference summaries (Lin, 2004). | how to use MRT under a length constraint was an open problem; thus we propose a global optimization under length constraint (GOLC) for neural summarization models. | contrasting |
train_5119 | Models trained with GOLC showed better ROUGE scores than those of maximum log-likelihood based methods while generating summaries satisfying the length constraint. | to the approximately 20% and 50% of overlength summaries generated by the other state-of-the-art models, our method only generated 6.70% of overlength summaries on CNN/Daily Mail and 7.8% on the long summaries of Mainichi while improving ROUGE scores. | contrasting |
train_5120 | Each Q_θ(y−|x) for summaries shorter than the length constraint increases because ∆(y, y−) < 0. | each Q_θ(y+|x) for overlength summaries decreases because ∆(y, y+) > 0. | contrasting |
train_5121 | Since short summary lengths distribute approximately from 10 to 17, lengths of summaries generated by PG (MLE), which does not have the capability of controlling summary length, are less than the length constraint 17. | PG w/ LE (MLE) and PG w/ LE (GOLC) tend to generate summaries of the same length as the length constraint. | contrasting |
train_5122 | Specifically, these works instantiate their encoder-decoder framework by choosing recurrent neural networks (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018) as the encoder, an auto-regressive decoder (Chen and Bansal, 2018; Jadhav and Rajan, 2018; Zhou et al., 2018) or a non auto-regressive decoder (Isonuma et al., 2017; Narayan et al., 2018; Arumae and Liu, 2018) as the decoder, based on pre-trained word representations (Mikolov et al., 2013; Pennington et al., 2014). | how to use the Transformer in extractive summarization is still an open issue. | contrasting |
train_5123 | From Table 5, we can find that context-independent word representations cannot contribute much to current models. | when the models are equipped with BERT, we are excited to observe that the performances of all types of architectures are improved by a large margin. | contrasting |
train_5124 | For instance, structural features like centrality and repetitions are still among the most used proxies for Importance (Yao et al., 2017;Kedzie et al., 2018). | such features just correlate with Importance in standard datasets. | contrasting |
train_5125 | Information theory provides the means to rigorously discuss the abstract concept of information, which seems particularly well suited as an entry point for a theory of summarization. | information theory concentrates on uncertainty (entropy) about which message was chosen from a set of possible messages, ignoring the semantics of messages (Shannon, 1948). | contrasting |
train_5126 | A simple model could be the uniform distribution over known information: 1/n if the user knows ω_i, and 0 otherwise. | k can control other variants of the summarization task: a personalized k_p models the preferences of a user by setting low probabilities to the semantic units of interest. | contrasting |
train_5127 | Single document summarization (SDS) systems have benefited from advances in neural encoder-decoder models thanks to the availability of large datasets. | multi-document summarization (MDS) of news articles has been limited to datasets of a couple of hundred examples. | contrasting |
train_5128 | The three source documents discuss the same event and contain overlaps in content: the fact that Meng Wanzhou was arrested is stated explicitly in Source 1 and 3 and indirectly in Source 2. | some sources contain information not mentioned in the others which should be included in the summary: Source 3 states that (Wanzhou) is being sought for extradition by the US while only Source 2 mentioned the attitude of the Chinese side. | contrasting |
train_5129 | (2018) trains abstractive sequence-to-sequence models on a large corpus of Wikipedia text with citations and search engine results as input documents. | no analogous dataset exists in the news domain. | contrasting |
train_5130 | A potential problem is that the applicability of a high-quality operator is often restricted. | one can increase the application probability by selecting not only the top-ranked n-gram but a small number of different near-top n-grams. | contrasting |
train_5131 | This indicates that the meaning becomes a strong clue to assign a domain to the document. | in the implicit semantic space created by using a neural language model such as Word2Vec, a word is represented as one vector even if it has several senses. | contrasting |
train_5132 | A restriction of the subject domain makes the problem of polysemy less problematic. | even in texts from a restricted subject domain such as Wall Street Journal corpus (Douglas and Janet, 1992), one encounters quite a large number of polysemous words. | contrasting |
train_5133 | Most of these approaches demonstrated that neural network models are powerful for learning effective features from textual input. | most of them for learning word vectors only allow a single context-independent representation for each word even if it has several senses. | contrasting |
train_5134 | Among the load of tasks like assigning reviewers, ensuring timely receipt of reviews, slot-filling against a non-responding reviewer, taking informed decisions, communicating with the authors, etc., editors/program chairs are usually overwhelmed with many such demanding yet crucial duties. | the major hurdle lies in deciding the acceptance or rejection of the manuscripts based on the reviews received from the reviewers. | contrasting |
train_5135 | Moreover, even a preliminary study into the inner workings of the peer review system is itself very difficult because of data confidentiality and copyright issues of the publishers. | the silver lining is that the peer review system is evolving with the likes of OpenReviews, author response periods/rebuttals, increased effective communications between authors and reviewers, open access initiatives, peer review workshops, review forms with objective questionnaires, etc. | contrasting |
train_5136 | (2018) relies on elementary handcrafted features extracted only from the paper and does not consider review features, whereas we include the review features along with the sentiment information in our deep neural architecture. | we also find that our approach with only Review+Sentiment performs worse than the Paper+Review method in Kang et al. | contrasting |
train_5137 | Furthermore, the corruption mechanism is nothing other than traditional dropout on the input layer. | coupled with the word2vec-style loss and training methods, this paper offers little on the novelty front. It is very efficient at generation time, requiring only an average of the word embeddings rather than a complicated inference step as in Doc2Vec. | contrasting |
train_5138 | With further exploration, we aim to mould the ongoing research into an efficient AI-enabled system that would assist the journal editors or conference chairs in making informed decisions. | considering the sensitivity of the topic, we would like to further dive deep into exploring the subtle nuances that lead to the grading of peer review aspects. | contrasting |
train_5139 | The new end-to-end speech recognition approach (Graves et al., 2006; Graves and Jaitly, 2014; Hannun et al., 2014; Miao et al., 2015; Chorowski et al., 2015; Chan et al., 2016) integrates all available information within a single neural network model, making it possible to fuse conversational-context information. | these are limited to encoding only one preceding utterance and learning from a few hundred hours of annotated speech corpus, leading to minimal improvements. | contrasting |
train_5140 | Several recent studies have considered embedding longer context information within an end-to-end framework (Kim and Metze, 2019). | with our method, which can learn a better conversational-context representation with a gated network that incorporates external word/sentence embeddings from a history of multiple preceding sentences, their methods are limited to learning the conversational-context representation from one preceding sentence in the annotated speech training set. | contrasting |
train_5141 | Table 1: SEQ vs. CAMB system results on words only and on words and phrases systems achieve similar results: the SEQ model beats the CAMB model only marginally, with the largest difference of +1.05% on the WIKINEWS data. | it is worth highlighting that the CAMB system does not perform any phrase classification per se and simply marks all phrases as complex. | contrasting |
train_5142 | The topic information of news is critical for learning accurate news and user representations for news recommendation. | it is not considered in many existing news recommendation methods. | contrasting |
train_5143 | Thus, the performance improves when λ increases from 0. | when λ becomes too large, the performance of our approach declines. | contrasting |
train_5144 | Some graphic design applications such as Adobe Spark perform automatic text layout using templates that include images and text with different fonts and colors. | their text layout algorithms are mainly driven by visual attributes like word length, rather than the semantics of the text or the user's intent, which can lead to unintended emphasis and the wrong message. | contrasting |
train_5145 | Keyword detection mainly focuses on finding important nouns or noun phrases. | social media text is much shorter, and users tend to emphasize a subset of words with different roles to convey specific intent. | contrasting |
train_5146 | From Figure 2, it can be seen that, despite small improvements, different featurizations provide similar APS values. | focusing more on the early parts of the PRC, i.e., high precision, we can see that there is a significant improvement of recall. | contrasting |
train_5147 | Up to now, the model is completely agnostic of sequence positions. | position information is crucial in natural language, so a mechanism to represent such information in the model is needed. | contrasting |
train_5148 | A simple way to realize self-attentional modeling for lattice inputs would be to linearize the lattice in topological order and then apply the above model. | such a strategy would ignore the lattice structure and relate queries to keys that cannot possibly appear together according to the lattice. | contrasting |
train_5149 | Below, we demonstrate that the binary masking approach (§4.1.1) is biased such that computed representations are impacted by path duplication. | the probabilistic approach (§4.1.2) is invariant to path duplication. | contrasting |
train_5150 | For the sequential lattice with binary mask, it is: [equation omitted] Here, C is the softmax normalization term that ensures that exponentiated similarities sum up to 1. | the lattice with duplication results in a doubled influence of v_a: [equation omitted] The probabilistic approach yields the same result as the binary approach for the sequential lattice (Equation 11). | contrasting |
train_5151 | For lexical cohesion, Wong and Kit (2012) count the ratio between the number of repeated and lexically similar content words over the total number of content words in a target document. | Guillou (2013) and Carpuat and Simard (2012) find that translations generated by a machine translation system tend to be similarly or more lexically consistent, as measured by a similar metric, than human ones. | contrasting |
train_5152 | This paradigm is widely used in Multi-NMT systems due to simple implementation and convenient deployment. | this paradigm has two drawbacks. | contrasting |
train_5153 | As shown in part II, the Multi-NMT baselines underperform the NMT baselines in all cases. | our method performs better than the NMT baselines, and it achieves an improvement of up to 1.76 BLEU points on the Nl→It translation task. | contrasting |
train_5154 | Both statistical (Lample et al., 2018b; Artetxe et al., 2018b) and neural (Artetxe et al., 2018c; Lample et al., 2018a) MT approaches were proposed, which are promising directions to overcome the data sparsity problem. | various issues of the approaches still have to be solved, e.g., better word reordering during translation or tuning system parameters. | contrasting |
train_5155 | our conservative approach also misses true parallel pairs resulting in a significant drop in recall. | we argue that precision is more important for downstream tasks, since noise in the data often hurts performance. | contrasting |
train_5156 | In addition, it mentions US presidency related entities twice, once as Clinton and once confusing it with Obama. | by using parallel sentences the results are more fluent and accurate. | contrasting |
train_5157 | The empirical results in this section show that the quality of pre-trained UBWE is important to UNMT. | the quality of UBWE decreases significantly during UNMT training. | contrasting |
train_5158 | Because there are many shared subwords in these similar language pairs, this method achieves better performance than other UBWE methods. | this initialization does not work for distant language pairs such as English-Japanese and English-Chinese, where there are few shared subwords. | contrasting |
train_5159 | This indicates that a larger dictionary size helps estimate a better UBWE agreement. | larger dictionary size did not always obtain a higher BLEU as shown in Table 3. | contrasting |
train_5160 | A simple but commonly used solution is pivoting: we first translate from Spanish to English, and then from English to French with two separately trained single-pair models or a single multilingual model. | it comes with two drawbacks: (1) at least 2× higher latency than that of a comparable direct translation model; (2) the models used in pivot-based translation are not trained taking into account the new language pair, making it difficult, especially for the second model, to cope with errors created by the first model. | contrasting |
train_5161 | After training, one obtains the optimized W and then easily infers word alignment for a test sentence pair x, y via the MAP strategy as defined in Equation 3. Note that if word embeddings and hidden units learned by NMT models capture enough information for word alignment, the above method can obtain excellent word alignment. | because the dataset for supervision in training definitely includes some data-intrinsic word alignment information, it is unclear how much word alignment comes only from the NMT models. | contrasting |
train_5162 | Previous experiments mainly consider the alignment for the reference, and show that EAM is better at aligning a reference word to source words than PD. | in order to better understand the translation process of a NMT model, it is helpful to analyze the alignment of real translations derived from the NMT model itself. | contrasting |
train_5163 | The main reason is that EAM relies on an external aligned dataset with supervision from the statistical word aligner FAST ALIGN, and thus the characteristics of its alignment results are similar to those of FAST ALIGN, leading to an understanding biased toward FAST ALIGN. | PD only relies on predictions from a neural model to define the relevance; it has been successfully used to understand and interpret a neural model (Zintgraf et al., 2017). | contrasting |
train_5164 | As shown in Figure 4(a), as a functional word, 'a' should not be aligned to any source word. | in Figure 4(b) PD incorrectly aligned 'a' to 'háng hǎi jīa' by only considering the contributions from the source side, and this leads to a misunderstanding of why 'a' is translated. | contrasting |
train_5165 | (2017) first developed a non-autoregressive NMT system which produces the outputs in parallel, and the inference speed is thus significantly boosted. | it comes at the cost that the translation quality is largely sacrificed since the intrinsic dependency within the natural language sentence is abandoned. | contrasting |
train_5166 | Several others use reinforcement learning (RL) to develop an agent to predict read and write decisions (Satija and Pineau, 2016;Gu et al., 2017;Alinejad et al., 2018). | due to computational challenges, they pre-train an NMT model on full sentences and then train an agent that sees the fixed NMT model as part of its environment. | contrasting |
train_5167 | At the lowest comparable latency (4 tokens), MILk is 1.5 BLEU points ahead of wait-k. EnFr is a much easier language pair: both MILk and wait-k maintain the BLEU of full attention at lags of 10 tokens. | we were surprised to see that this does not mean we can safely deploy very low values of k for wait-k; its quality drops off surprisingly quickly at k = 8 (DAL = 8.4, BLEU = 39.8). | contrasting |
train_5168 | The omitted BLEU/DAL pairs are 19.5/2.5 for DeEn and 28.9/2.9 for EnFr, both of which trade very large losses in BLEU for small gains in lag. | wait-k's ability to function at all at such low latencies is notable. | contrasting |
train_5169 | We observe that, in the early stage of training, the validation loss of RNN decreases faster than Transformer. | it starts to overfit soon. | contrasting |
train_5170 | By considering the reasoning patterns, one can discover that Luc Besson could speak English by following a line of reasoning: Luc Besson directed Léon: The Professional, and this film being in English indicates that Luc Besson could speak English. | most existing GNNs can only process multi-hop relational reasoning on pre-defined graphs and cannot be directly applied in natural language relational reasoning. | contrasting |
train_5171 | From Fig. 3, we observe that as the number of layers grows, the curves achieve higher and higher precision, indicating that considering more hops in reasoning leads to better performance. | the improvement of the third layer is much smaller on the overall distantly supervised test set than on the dense subset. | contrasting |
train_5172 | In practical scenario, relation extraction needs to first identify entity pairs that have relation and then assign a correct relation class. | the number of non-relation entity pairs in context (negative instances) usually far exceeds the others (positive instances), which negatively affects a model's performance. | contrasting |
train_5173 | Strictly speaking, most existing studies of relation extraction treat the task as relation classification. | relation extraction often comes with an extremely imbalanced dataset where the number of non-relation entity pairs far exceeds the others, making it a more challenging yet more practical task than relation classification. | contrasting |
train_5174 | One possible reason is that all the models are closely related to each other. | how they affect each other in this joint setting is still an open question. | contrasting |
train_5175 | For S3, our "GCN" identifies a PHYS relation between "[units]_PER" and "[capital]_GPE", while the "NN" does not find this relation even though the entities are correct. | both models do not identify the ART relation between "[units]_PER" and "[weapons]_WEA". | contrasting |
train_5176 | The BERT model has been successfully applied to various NLP tasks. | the final prediction layer used in the original model is not applicable to MRE tasks. | contrasting |
train_5177 | Note that the system (Rotsztejn et al., 2018) integrates many techniques like feature engineering, model combination, pretraining embeddings on in-domain data, and artificial data generation, while our model is almost a direct adaptation of the ACE architecture. | compared to the top single-model result (Luan et al., 2018), which makes use of additional word and entity embeddings pretrained on in-domain data, our methods demonstrate a clear advantage as a single model. | contrasting |
train_5178 | Adding a structured prediction layer to BERT (i.e., BERT_SP) also leads to a similar amount of improvement. | the gap between the BERT_SP method with and without entity-aware attention is small. | contrasting |
train_5179 | In a supervised setting, discriminative approaches, such as deep neural network classifiers, have demonstrated substantial improvement. | these models are hard to train without supervision, and the currently proposed solutions are unstable. | contrasting |
train_5180 | Most relation extraction systems need to be trained on a labeled dataset. | human annotation is expensive, and virtually impractical when a large number of relations is involved. | contrasting |
train_5181 | Later, the OpenIE approaches (Banko et al., 2007; Angeli et al., 2015) relied upon the hypothesis that the surface form of the relation conveyed by a sentence appears in the path between the two entities in its dependency tree. | these latter works are too dependent on the raw surface form and suffer from poor generalization. | contrasting |
train_5182 | paraphrases, relation aliases, and entity types (Vashishth et al., 2018). | we observe that these models are often biased towards recognizing a limited set of relations with high precision, while ignoring those in the long tail (see Section 5.2). | contrasting |
train_5183 | PCNN+ATT has the highest average precision at 94.3%, 3% higher than the 91.2% of RESIDE and 5% higher than our model. | we see that this is mainly due to PCNN+ATT's very high P@100 and P@200 scores. | contrasting |
train_5184 | Similarly, our approach predicts a larger set of distinct relation types with high confidence among the top predictions. | to RESIDE, which uses explicitly provided side information and linguistic features, our approach only utilizes features implicitly captured in pre-trained language representations. | contrasting |
train_5185 | Instead, we use a recently released sentence-level test set (Ren et al., 2017) for evaluation. | there also exist several problems in this test set (see Sec. | contrasting |
train_5186 | To automatically obtain a large training dataset, DS has been proposed (Mintz et al., 2009). | DS also introduces noisy data, making DS-based RC more challenging. | contrasting |
train_5187 | They use Bi-LSTM on each sentence for automatic feature learning, and the extracted hidden features are shared by a sequential entity tagger and a shortest dependency path relation classifier. | while introducing shared parameters for joint entity recognition and relation extraction, they must still pipeline the entity mentions predicted by the tagger to form mention pairs for the relation classifier. | contrasting |
train_5188 | (2018) improved the performance of the vanilla LSTM (Hochreiter and Schmidhuber, 1997) by utilizing RL to discover structured representations and interpreted the sentiment predictions of neural models by employing RL to find the decision-changing phrases. | NRE models are unique because we only care about the semantic inter-entity relation mentioned in the sentence. | contrasting |
train_5189 | We conjecture that this behavior is due to the gap between maximizing the likelihood of the NRE model and the ground-truth instance selection. | DIAG-NRE can contribute both stable and interpretable improvements with the help of human-readable patterns. | contrasting |
train_5190 | The extracted named entities can benefit various subsequent NLP tasks, including syntactic parsing (Koo and Collins, 2010), question answering (Krishnamurthy and Mitchell, 2015) and relation extraction (Lao and Cohen, 2010). | accurately recognizing representative entities in natural language remains challenging. | contrasting |
train_5191 | Neural language representation models such as BERT pre-trained on large-scale corpora can well capture rich semantic patterns from plain text, and be fine-tuned to consistently improve the performance of various NLP tasks. | the existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. | contrasting |
train_5192 | From the results, we observe that: (1) BERT achieves comparable results with NFGEC on the macro and micro metrics. | BERT has lower accuracy than the best NFGEC model. | contrasting |
train_5193 | Following the success of KG representation learning, recent work embeds entities in different KGs into a low-dimensional vector space with the help of seed alignments (Chen et al., 2017). | the limited seeds and structural differences have a great negative impact on the quality of KG embeddings, which leads to poor alignment performance. | contrasting |
train_5194 | While other neural architectures for graphs exist, we believe that GGNN is more suitable for the Chinese NER task for its better capability of capturing the local textual information compared to other GNNs such as GCN (Kipf and Welling, 2017). | the traditional GGNN (Li et al., 2016) is unable to distinguish edges with different labels. | contrasting |
train_5195 | These models can potentially generate any sequence, and are thus sometimes referred to as open vocabulary. | they tend to underperform compared to word level models when trained on the same data. | contrasting |
train_5196 | For this section, we restrict ourselves to predicting characters and bigrams for simplicity. | our approach is straightforward to apply to n-grams or words. | contrasting |
train_5197 | Neural language models are typically trained by predicting the next word given a past context (Bengio et al., 2003). | natural sentences are not constructed as simple linear word sequences, as they usually contain complex syntactic information. | contrasting |
train_5198 | For example, in Figure 4b, the TCN model assigned the highest syntactic height to the word "market" and induced the phrase "expect a rough market" for the context "the fund managers". | in a ground-truth dependency tree, the verb "expect" is the word directly connected to the root node and therefore has the highest syntactic height. | contrasting |
train_5199 | θ is randomly and uniformly sampled from [−π, π]. | our early experiments show that, at least within the context of NLP applications, this initialization performed comparably to or worse than the standard Glorot initialization. | contrasting |
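
The rows above are an excerpt from a sentence-pair classification dataset with four label classes, of which only `contrasting` appears here. As a minimal sketch of how such a split could be loaded and inspected once exported, the snippet below assumes a JSON Lines file with the four columns shown; the file name `train.jsonl` and the export format are assumptions, not part of the source.

```python
# Minimal sketch: load a sentence-pair split with the columns shown above
# (id, sentence1, sentence2, label). The file name "train.jsonl" is
# hypothetical; point it at wherever the split was exported.
import json
from collections import Counter

rows = []
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

# The header says `label` has 4 classes; every row in the excerpt above
# happens to carry the "contrasting" label.
print(Counter(row["label"] for row in rows))

# Collect the contrasting pairs as (premise, continuation) tuples.
pairs = [
    (row["sentence1"], row["sentence2"])
    for row in rows
    if row["label"] == "contrasting"
]
print(len(pairs), "contrasting pairs")
if pairs:
    print(pairs[0][0][:80], "=>", pairs[0][1][:80])
```

Reading the file line by line keeps memory use flat even for large splits, which suits the long sentence fields in this data.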