id (stringlengths 7-12) | sentence1 (stringlengths 6-1.27k) | sentence2 (stringlengths 6-926) | label (stringclasses, 4 values) |
---|---|---|---|
train_4200 | However, the method was a case of ignoring too many relevant words and accuracy was fluctuating in the mid-60% range, which is why we did not report the results. | it serves to further justify the choice of 5 words as the predicate window as fewer words caused the model to underperform. | contrasting |
train_4201 | Image representation learning has been successful via supervision from very large object-labeled datasets. | similar amounts of supervision are lacking for video representation learning. | contrasting |
train_4202 | Ensemble and multi-view learning were helpful for the Cookie Theft dataset, in which multi-view learning achieved the highest accuracy (65% accuracy for narrative texts, a 3% improvement compared to the best individual classifier). | neither multi-view nor ensemble learning enhanced accuracy in the Cinderella dataset, where SVM-RBF with CNE space achieved the highest accuracy (65%). | contrasting |
train_4203 | it is considered that she is going to have a chat with the system. | if she says "Set an alarm at 8 o'clock," she is probably trying to operate her smartphone. | contrasting |
train_4204 | (I) Transforming grammatical roles into feature vectors: Grammatical roles are fed to our model as indices taken from a finite vocabulary V. In the simplest scenario, V contains {S, O, X, −}. | we will see in Section 3.1 that as we include more entity-specific features, V can contain more symbols. | contrasting |
train_4205 | In academia, work on generating human-like interaction focused so far on generating responses to tweets (Ritter et al., 2011;Hasegawa et al., 2013) or taking turns in short dialogs (Li et al., 2017). | the architectures assumed in these studies implement sequence to sequence (seq2seq) mappings, which do not take into account topics, sentiments or agendas of the intended responders. | contrasting |
train_4206 | We introduce an attention-based sequence learning model for the task and investigate the effect of encoding sentence- vs. paragraph-level information. | to all previous work, our model does not rely on hand-crafted rules or a sophisticated NLP pipeline; it is instead trainable end-to-end via sequence-to-sequence learning. | contrasting |
train_4207 | Our model successfully captures this information, while H&S only performs some syntactic transformation over the input without paraphrasing. | outputs from our system are not always "perfect", for example, in pair 6, our system generates a question about the reason why birds still grow, but the most related question would be why many species still grow. | contrasting |
train_4208 | Although existing summarization algorithms come up with a generic notion of importance, it is still far from the user-specific importance as shown in Figure 1. | humans can easily assess importance given a topic or a query. | contrasting |
train_4209 | Our JOINT model explores well in terms of prioritizing the concepts which yet lack user feedback. | it gives equal probabilities to all the unseen concepts. | contrasting |
train_4210 | (3) Adversarial Deep Averaging Network Similar to our approach, the adversarial Deep Averaging Network (ADAN) also exploits adversarial training for CLTC (Chen et al., 2016). | it does not have the parallel-corpus based knowledge distillation part (which we do). | contrasting |
train_4211 | We should also point out that in Table 2, the four baseline methods (PL-LSI, PL-KCCA, PL-OPCA and PL-MC) were evaluated under the condition of using additional 100 labeled target documents for training, according to the author's report (Xiao and Guo, 2013). | our methods (CLD-KCNN and CLDFA-KCNN) were evaluated under a tougher condition, i.e., not using any labeled data in the target domains. | contrasting |
train_4212 | The text-based features are extracted from the manual transcripts of the sessions, while the audio-based features are extracted from audio segments obtained by force-aligning each session transcript with its corresponding audio. | as future work, we are considering automating this process by conducting automatic speaker diarization and transcription via automatic speech recognition. | contrasting |
train_4213 | ", knowing that "Maigret" can refer to a TV show can greatly help disambiguate its meaning. | knowledge bases may hurt performance if used blindly. | contrasting |
train_4214 | Many attempts have been made on connecting distributed representations of KBs with text in the context of knowledge base completion (Lao et al., 2011;Gardner et al., 2014;Toutanova et al., 2015), relation extraction (Chang et al., 2014;Riedel et al., 2013), and question answering (Miller et al., 2016). | these approaches model text using shallow representations such as subject/relation/object triples or bag of words. | contrasting |
train_4215 | In teaching videos of "back propagation", the concept "gradient descent" is frequently mentioned when illustrating the optimization detail of back propagation. | however, "back propagation" is unlikely to be mentioned when teaching "gradient descent". | contrasting |
train_4216 | For example, "data set" and "training set" have learning dependencies and the latter concept is more advanced than the former one. | "test set" and "training set" have no such relation when their complexity levels are similar. | contrasting |
train_4217 | Second, T-SRI has certain effectiveness for learning prerequisite relations, with F1 ranging from 62.1 to 65.2%. | T-SRI only considers relatively simple features, such as the sequential and co-occurrence among concepts. | contrasting |
train_4218 | Different research lines have been proposed around this topic, including hypernym-hyponym relation extraction (Ritter et al., 2009;Wei et al., 2012), entity relation extraction (Zhou et al., 2006;Fan et al., 2014;Lin et al., 2015) and open relation extraction (Fader et al., 2011). | previous works mainly focus on factual relations, the extraction of cognitive relations (e.g. | contrasting |
train_4219 | Eisenstein and Barzilay (2008) discuss this within the context of topic segmentation. | it is unclear if this would also happen for POS tags; there is no syntactic analogue for the sort of lexical chains important in topic segmentation. | contrasting |
train_4220 | The training time of order-o CRFs grows exponentially (O(M^(o+1))) with the number of output labels M, which is typically slow even for moderate-size training data if M is large. | the training time of order-o MEMMs is linear (O(M)) with respect to M independent of o, so it can handle larger training data with higher order of dependency. | contrasting |
train_4221 | (2015) also showed that joint learning of lemmas with other morphological attributes is mutually beneficial but obtaining the gold annotated datasets is very expensive. | our model needs only lemma annotated continuous text (not POS and other tags) to learn the word morphology. | contrasting |
train_4222 | While using the semantic embedding, only distributional word vectors are used for edit tree classification. | to test the effect of the syntactic embedding exclusively, output from the character level recurrent network is fed to the second level BGRNN. | contrasting |
train_4223 | Recent advances in neural networks provide strong representational power to language models with distributed representations and unbounded dependencies based on recurrent networks (RNNs). | most language models operate by generating words by sampling from a closed vocabulary which is composed of the most frequent words in a corpus. | contrasting |
train_4224 | This enables word types that are highly predictive in context to compete with the probability of a copy event. | since we are working with an open vocabulary, this strategy is unavailable in our model, so we use the MLP formulation. | contrasting |
train_4225 | Depending on the hardware support for these operations (repeated updates of recurrent states vs. softmaxes), our model may be faster or slower. | our model will have fewer parameters than a word-based model since most of the parameters in such models live in the word projection layers, and we use LSTMs in place of these. | contrasting |
train_4226 | These works are closely related to ours in that they use the technique of score function gradient estimators (Fu, 2006;Schulman et al., 2015) for stochastic learning. | the learning environment of Shen et al. | contrasting |
train_4227 | −BLEU(ỹ), over all input and output structures: In the case of full-information learning where reference outputs are available, we could evaluate all possible outputs against the reference to obtain an exact estimation of the loss function. | this is not feasible in our setting since we only receive partial feedback for a single output structure per input. | contrasting |
train_4228 | Since we want the model to learn to rank ỹ_i over ỹ_j, we would have to sample ỹ_i word-by-word from p_θ^+ and ỹ_j from p_θ^−. | sampling all words of ỹ_j from p_θ^− leads to translations that are neither fluent nor source-related, so we propose to randomly choose one position of ỹ_j where the next word is sampled from p_θ^− and sample the remaining words from p_θ^+. | contrasting |
train_4229 | (2014), yielding the following control variate: Note that for both types of control variates, (7) and (8), the expectation Ȳ is zero, simplifying the implementation. | the optimal scalar ĉ has to be estimated for every entry of the gradient separately for the score function control variate. | contrasting |
train_4230 | The NMT bandit models that optimize the EL objective yield generally a much higher improvement over the out-of-domain models than the corresponding linear models: As listed in Table 4, we find improvements of between 2.33 and 2.89 BLEU points on the NC domain, and between 4.18 and 5.18 BLEU points on the TED domain. | the linear models with sparse features and hypergraph re-decoding achieved a maximum improvement of 0.82 BLEU points on NC. | contrasting |
train_4231 | As maximizing F(θ, q) involves minimizing the KL divergence, Ganchev et al. | (2010) present a minorization-maximization algorithm akin to EM at sentence level: directly applying posterior regularization to neural machine translation faces a major difficulty: it is hard to specify the hyper-parameter b to effectively bound the expectation of features, which are usually real-valued in translation (Och and Ney, 2002;Koehn et al., 2003;Chiang, 2005). | contrasting |
train_4232 | Therefore, it is easy to use standard stochastic gradient descent algorithms to train our model. | a major difficulty in calculating gradients is that the algorithm needs to sum over all candidate translations in an exponential search space for KL divergence. | contrasting |
train_4233 | risks making locally optimal decisions which are actually globally sub-optimal. | an exhaustive exploration of the output space would require scoring |V|^T sequences, which is intractable for most real-world models. | contrasting |
train_4234 | With the recent success of neural models for text generation, beam search has become the de-facto choice for decoding optimal output sequences (Sutskever et al., 2014). | with neural sequence models, we cannot organize beams by their explicit coverage of the input. | contrasting |
train_4235 | By narrowing down the number of advertisements that law enforcement must sift through, we endeavor to provide a real opportunity for law enforcement to intervene in the lives of victims. | there are non-trivial challenges facing this line of research: Adversarial Environment. | contrasting |
train_4236 | Such sentences involving the attacker are often irrelevant since the annotations focus on the malware and not the attacker. | the above sentence implies that the malware is a remote administration tool and hence is a relevant sentence that implies malware capability. | contrasting |
train_4237 | By analyzing the hashes listed in each APT report, we obtain a list of signatures for the malware discussed in the report. | we are unable to obtain the signatures for several hashes due to restricted distribution of malware samples. | contrasting |
train_4238 | The current list of malware signatures from Cuckoo Sandbox 3 consists of 378 signature types. | only 68 signature types have been identified for the malware discussed in the 31 documents. | contrasting |
train_4239 | Additional information about revision purposes may elicit a stronger self-reflection response in Group A participants. | in Group B, there is a significant negative correlation between the number of Rev12 and ratings for the statement "it is convenient to view my previous revisions with the system" (ρ=-.36 and p < .05). | contrasting |
train_4240 | This suggests that the character-based interface is ineffective when participants have to reflect on many changes. | [table of revision counts by category omitted] when comparing the number of revisions made by Group A and Group B on Rev23 (controlling for their Rev12 numbers), we did not find a significant difference. | contrasting |
train_4241 | Such approaches are able to discover homonymous senses of words, e.g., "bank" as slope versus "bank" as organisation (Di Marco and Navigli, 2012). | as the graphs are usually composed of semantically related words obtained using distributional methods (Baroni and Lenci, 2010;Biemann and Riedl, 2013), the resulting clusters by no means can be considered synsets. | contrasting |
train_4242 | Results across various configurations and methods indicate that using the weights based on the similarity scores provided by word embeddings is the best strategy for all methods except MaxMax on the English datasets. | its performance using the ones weighting does not exceed the other methods using the sim weighting. | contrasting |
train_4243 | RuWordNet is more domainspecific in terms of vocabulary, so our input set of generic synonymy dictionaries has a limited coverage on this dataset. | recall calculated on YARN is substantially higher as this resource was manually built on the basis of synonymy dictionaries used in our experiments. | contrasting |
train_4244 | That is, information about one predicate-argument relation could help to identify another predicate-argument relation. | to model such multi-predicate interactions, the joint approach in the previous studies relies heavily on syntactic information, such as part-of-speech (POS) tags and dependency relations predicted by POS taggers and syntactic parsers. | contrasting |
train_4245 | Overall, performance of both models gradually deteriorated as the number of predicates in a sentence increased, because sentences that contain many predicates are complex and difficult to analyze. | compared to the single-sequence model, the multi-sequence model suppressed performance degradation, especially for zero arguments (Zero). | contrasting |
train_4246 | The recently released MS Marco dataset (Nguyen et al., 2016) also contains independently authored questions and documents drawn from the search results. | the questions in the dataset are derived from search logs and the answers are crowdsourced. | contrasting |
train_4247 | However, the questions in the dataset are derived from search logs and the answers are crowdsourced. | trivia enthusiasts provided both questions and answers for our dataset. | contrasting |
train_4248 | We can see that ALIGN, SPME and MPME achieve higher performance in dealing with semantic questions, because relations among entities (e.g., country-capital relation for entity France and Paris) enhance the semantics in word embeddings through joint training. | their performance for syntactic questions is weakened because more accurate semantics yields a bias to predict semantic relations even when given a syntactic query. | contrasting |
train_4249 | This is because SPME learns word embeddings and entity embeddings in separate semantic spaces, and fails to measure the similarity between context words and candidate entities. | MPME computes the similarity between context words with mention sense instead of entities, thus achieves the best performance, which also demonstrates the high quality of the mention sense embeddings. | contrasting |
train_4250 | Moreover, the full distribution provides much richer information than point estimates for characterizing words, representing probability mass and uncertainty across a set of semantics. | since a Gaussian distribution can have only one mode, the learned uncertainty in this representation can be overly diffuse for words with multiple distinct meanings (polysemies), in order for the model to assign some density to any plausible semantics (Vilnis and McCallum, 2014). | contrasting |
train_4251 | When the input x_t is a negation word, the sentiment distribution should be shifted/reversed accordingly. | the negation role is more complex than that of sentiment words; for example, the word "not" in "not good" and "not bad" has different roles in polarity change. | contrasting |
train_4252 | The RNN outscores Moses in terms of PINC and PINC * sigmoid(BLEU), meaning that its interpretations are more novel, in terms of ngrams. | this alone might not be a negative trait; according to human judgments Moses performs better in terms of fluency, adequacy and sentiment, and so the novelty of the RNN's interpretations does not necessarily contribute to their … [Figure 1: An illustration of the application of SIGN to the tweet "How I love Mondays # sarcasm".] | contrasting |
train_4253 | Existing active learning methods usually randomly select a set of unlabeled samples to annotate and then train the initial classifier on them (Settles, 2010). | these randomly selected samples may be redundant and not informative enough. | contrasting |
train_4254 | ILP relies on labeled samples to extract the relations among words and relations between words and sentiment expressions. | labeled samples in target domain are usually limited and the sentiment information in many unlabeled samples is not exploited in ILP. | contrasting |
train_4255 | The best performing results were achieved with two hidden layers (400 and 500 nodes respectively), tanh for activation function, and learning rate of 0.001 in gradient descent with early stopping. | the networks could not provide superior results to the SVM regressors. | contrasting |
train_4256 | In the context of multi-model learning, the method is referred to as early fusion. | late fusion approaches first learn a model on each feature set and then use/learn a meta model to combine their results. | contrasting |
train_4257 | An alternative metric, used in previous studies (Wang et al., 2013;Tsai and Wang, 2014;Kogan et al., 2009) is Mean Squared Error MSE = (1/n) Σ_i (ŷ_i − y_i)². | especially when comparing models, applied on different test sets (e.g. | contrasting |
train_4258 | Network embedding (NE) is playing a critical role in network analysis, due to its ability to represent vertices with efficient low-dimensional embedding vectors. | existing NE models aim to learn a fixed context-free embedding for each vertex and neglect the diverse roles when interacting with other vertices. | contrasting |
train_4259 | 1), a social-media user contacts with various friends sharing distinct interests, and a web page links to multiple pages for different purposes. | most existing NE methods only arrange one single embedding vector to each vertex, and give rise to the following two inevitable issues: (1) These methods cannot flexibly cope with the aspect transition of a vertex when interacting with different neighbors. | contrasting |
train_4260 | In conventional NE models, each vertex is represented as a static embedding vector, denoted as context-free embedding. | CANE assigns dynamic embeddings to a vertex according to different neighbors it interacts with, named as context-aware embedding. | contrasting |
train_4261 | The context-free embedding of u remains unchanged when interacting with different neighbors. | the context-aware embedding of u is dynamic when confronting different neighbors. | contrasting |
train_4262 | The attention weights over A in edge #1 are assigned to "reinforcement learning". | the weights in edge #2 are assigned to "machine learning", "supervised learning algorithms" and "complex stochastic models". | contrasting |
train_4263 | Our work is similar to these methods in using a neural network model for knowledge sharing between different languages. | ours is different in the use of a neural stacking model, which respects the distributional differences between Singlish and English words. | contrasting |
train_4264 | This simply involves a change of word orders and thus requires no special treatments. | tag questions should be carefully analyzed in two scenarios. | contrasting |
train_4265 | Similarly, we trained a POS tagger using the Singlish dependency treebank alone with pretrained word embeddings on The Singapore Component of the International Corpus of English (ICE-SIN) (Nihilani, 1992;Ooi, 1997), which consists of both spoken and written texts. | However, due to the limited amount of training data, the tagging accuracy is not satisfactory even with a larger dropout rate to avoid over-fitting. | contrasting |
train_4266 | However, due to the limited amount of training data, the tagging accuracy is not satisfactory even with a larger dropout rate to avoid over-fitting. | the neural stacking structure on top of the English base model trained on UD-Eng achieves a POS tagging accuracy of 89.50%, which corresponds to a 51.50% relative error reduction over the baseline Singlish model, as shown in Table 2. | contrasting |
train_4267 | Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. | in most cases, the recurrent network that operates on word-level representations to produce context sensitive representations is trained on relatively little labeled data. | contrasting |
train_4268 | (2017) are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% F1 for transfer from PTB POS tags. | they found only a 0.06% F1 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. | contrasting |
train_4269 | Current task-oriented dialogue systems (Young et al., 2013;Wen et al., 2017;Dhingra et al., 2017) require a pre-defined dialogue state (e.g., slots such as food type and price range for a restaurant searching task) and a fixed set of dialogue acts (e.g., request, inform). | human conversation often requires richer dialogue states and more nuanced, pragmatic dialogue acts. | contrasting |
train_4270 | In the above sentence, the arguments of the event include "He" (Role = Person) and "hospital" (Role = Place). | this paper does not focus on AE and only tackles the former task. | contrasting |
train_4271 | The correct type of the event triggered by "fired" in this case is End-Position. | it might be easily misidentified as Attack because "fired" is a multivocal word. | contrasting |
train_4272 | On the one hand, since joint methods simultaneously solve ED and AE, methods following this paradigm usually combine the loss functions of these two tasks and are jointly trained under the supervision of annotated triggers and arguments. | training corpus contains many more annotated arguments than triggers (about 9800 arguments and 5300 triggers in the ACE 2005 dataset) because each trigger may come with multiple event arguments. | contrasting |
train_4273 | Thus, the unbalanced data may cause joint models to favor AE task. | in implementation, joint models usually pre-predict several potential triggers and arguments first and then make global inference to select correct items. | contrasting |
train_4274 | On the one hand, strategy S1 only focuses on argument words, which provides accurate information to identify event type, thus ANN-S1 could achieve higher precision. | S2 focuses on both arguments and words around them, which provides more general but noisier clues. | contrasting |
train_4275 | On the one hand, more positive training samples consequently lead to higher recall. | the extra event samples are automatically extracted from FN, thus false-positive samples are inevitably involved, which may result in hurting the precision. | contrasting |
train_4276 | In segLDAcop, topic 1, the top-ranked words are mostly relevant to the topic "date" (e.g., march, january, year, fall, february, week). | a similar topic learned by LDA appears to involve fewer such words (year, january, february), indicating a less coherent topic. | contrasting |
train_4277 | Given a question, we try to generate its correct semantic parse in a formal language that can be predefined by the choice of structured data source (e.g., SQL). | we push the burden of feature engineering to neural networks as in NP. | contrasting |
train_4278 | In the extreme case, the empirical distribution can be set directly as the cloud of points. | a vector representation reduces data significantly, and its effectiveness relies on the assumption that the discarded information is irrelevant or nonessential to later analysis. | contrasting |
train_4279 | Several techniques have been proposed that either extend word embedding models to cluster contexts and induce senses, usually referred to as unsupervised sense representations (Schütze, 1998;Reisinger and Mooney, 2010;Huang et al., 2012;Neelakantan et al., 2014;Guo et al., 2014;Tian et al., 2014;Šuster et al., 2016;Ettinger et al., 2016;Qiu et al., 2016) or exploit external sense inventories and lexical resources for generating sense representations for individual meanings of words (Johansson and Pina, 2015;Jauhar et al., 2015;Iacobacci et al., 2015;Rothe and Schütze, 2015;Camacho-Collados et al., 2016b;Mancini et al., 2016;Pilehvar and Collier, 2016). | the integration of sense representations into deep learning models has not been so straightforward, and research in this field has often opted for alternative evaluation benchmarks such as WSD, or artificial tasks, such as word similarity. | contrasting |
train_4280 | The single model trained only on SQuAD is outperformed on all four of the datasets by the multitask model that uses distant supervision. | performance when training on SQuAD alone is not far behind, indicating that task transfer is occurring. | contrasting |
train_4281 | While it is essential for certain applications, such as machine translation, this characteristic also makes it slow to apply these models to scenarios that have long input text, such as document classification or automatic Q&A. | the fact that texts are usually written with redundancy inspires us to think about the possibility of reading selectively. | contrasting |
train_4282 | Finally, while training classifiers can be time consuming, when trained classifiers are deployed, feature extraction will dominate computation time over the classifier's lifetime. | the prediction step includes both feature extraction and computing inner products between features and weights. | contrasting |
train_4283 | In other words, its hidden layers are required to memorize the long-term dependencies and orders in the target language. | in our word-level decoder, the hidden state iterates only over the length of a chunk and then generates an end-of-chunk token. | contrasting |
train_4284 | The results show that even a standard word-based decoder has the ability to predict chunk boundaries if they are given in training data. | it is difficult for the word-based decoder to utilize the chunk information to improve the translation quality. | contrasting |
train_4285 | The decoders in the above studies can model the chunk structure by storing chunk pairs in a large table. | we do that by individually training a chunk generation model and a word prediction model with two RNNs. | contrasting |
train_4286 | In training, our goal is to find a set of source-to-target model parameters that minimizes the training objective: With learned source-to-target model parameters θ̂_{x→y}, we use the standard decision rule as shown in Equation 1 to find the translation ŷ for a source sentence x. | a major difficulty faced by our approach is the intractability in calculating the gradients because of the exponential search space of target sentences. | contrasting |
train_4287 | The results on test set for Europarl Corpus are 32.24 BLEU over Spanish-French translation and 24.91 BLEU over German-French translation, which are slightly better than the sent-beam method. | considering the training time and the memory consumption, we think mode approximation is already a good way to approximate the target sentence space for sentence-level teaching. | contrasting |
train_4288 | The results are in line with our observation in Table 2 that sentence-level KL divergence by beam approximation is smaller than that by greedy approximation. | as the … [Table 5: Comparison with previous work on Spanish-French translation in a zero-resource scenario over the WMT corpus.] | contrasting |
train_4289 | (2016) consistently helps when used with our proposed tree encoders, with the bidirectional tree encoder remaining the best. | the improvements of the tree encoder models are smaller than that of the baseline system. | contrasting |
train_4290 | Previous work used name string match to propagate labels. | we apply self-training to label other mentions without links in Wikipedia articles even if they have different surface forms from the linked mentions (Section 2.4). | contrasting |
train_4291 | Researchers also observe that these regularities can transfer across languages. | previous endeavors to connect separate monolingual word embeddings typically require cross-lingual signals as supervision, either in the form of parallel corpus or seed lexicon. | contrasting |
train_4292 | seed lexicon) (Gouws and Søgaard, 2015;Wick et al., 2016;Duong et al., 2016;Shi et al., 2015;Mikolov et al., 2013a;Faruqui and Dyer, 2014;Lu et al., 2015;Ammar et al., 2016;Zhang et al., 2016aZhang et al., , 2017Smith et al., 2017). | our work completely removes the need for cross-lingual signals to connect monolingual word embeddings, trained on non-parallel text corpora. | contrasting |
train_4293 | These obtained high accuracies on well-formed text (e.g., news articles), which led to LD being considered solved (McNamee, 2005). | there has been renewed interest with the amount of user-generated content on the web. | contrasting |
train_4294 | There has also been document-level LD that assigns multiple language to each document (Prager, 1999;Lui et al., 2014). | documents were synthetically generated, restricted to inter-sentential language mixing. | contrasting |
train_4295 | Using word-level LD for English-Hindi (Gella et al., 2013), observed that as much as 17% of Indian Facebook posts had code-switching, and showed that the native language is strongly preferred for expressing negative sentiment by English-Hindi bilinguals on Twitter. | without accurate multilingual word-level LD, there have been no large-scale studies on the extent and distribution of code-switching across various communities. | contrasting |
train_4296 | Also note that there are many sentences available so that online training methods such as discriminative training of structured perceptrons can be used to learn structured predictors effectively in those settings. | for the cognates setting the unit at which there are structural constraints is the entire set of cognates for a language pair and there is only one such unit in existence (for a given language pair). | contrasting |
train_4297 | Previous work has yielded state of the art approaches that create a matrix of scores for all word pairs based on optimized weighted combinations of component scores computed on the basis of various helpful sources of information such as phonetic information, word context information, temporal context information, word frequency information, and word burstiness information. | when assigning a score to a word pair, the current state of the art methods do not take into account scores assigned to other word pairs. | contrasting |
train_4298 | RNN sequence-to-sequence models (Bahdanau et al., 2015) are the state of the art for paradigm completion (Kann and Schütze, 2016a;Cotterell et al., 2016a). | these models require a large amount of data to achieve competitive performance; this makes them unsuitable for out-of-the-box application to paradigm completion in the low-resource scenario. | contrasting |
train_4299 | The two metrics differ in that accuracy gives no partial credit and incorrect answers may be drastically different from the annotated form without incurring additional penalty. | edit distance gives partial credit for forms that are closer to the true answer. | contrasting |
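To work with these rows programmatically, the minimal sketch below parses the pipe-separated table and checks the label distribution and sentence lengths. It is written under stated assumptions: the table above has been saved locally as a plain-text file named `train.psv` (a hypothetical file name, not part of any published loader), and no sentence contains a literal `|` character.

```python
from collections import Counter

FIELDS = ["id", "sentence1", "sentence2", "label"]

rows = []
with open("train.psv", encoding="utf-8") as f:
    for line in f:
        # Each data row looks like:
        # train_4200 | <sentence1> | <sentence2> | contrasting |
        parts = [p.strip() for p in line.split("|")]
        if len(parts) >= 4 and parts[0].startswith("train_"):
            rows.append(dict(zip(FIELDS, parts[:4])))

# Label balance: every row excerpted on this page is "contrasting",
# though the schema header says the full label column has 4 classes.
print(Counter(row["label"] for row in rows))

# Sentence-length range, to compare against the stringlengths in the header.
lengths = [len(row["sentence1"]) for row in rows]
print("sentence1 length: min", min(lengths), "max", max(lengths))
```

Splitting on `|` is adequate for the excerpt shown here; a production loader would read the dataset in its native format rather than re-parsing the rendered table.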