id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
train_16500
The projected representations for the same sense are similar; the Euclidean distance is 0.23 (row 2,3).
for the different senses, the Euclidean distance is 1.12 (row 1,2).
contrasting
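A minimal sketch (with invented stand-in vectors, not the paper's projected representations) of the Euclidean-distance comparison quoted in this pair:

```python
# Toy check of the kind of comparison described above: Euclidean distance between
# projected representations (the vectors here are invented stand-ins, not the paper's).
import numpy as np

def euclidean(u, v):
    return float(np.linalg.norm(np.asarray(u) - np.asarray(v)))

same_sense_a, same_sense_b = [0.40, 0.10, 0.30], [0.45, 0.05, 0.08]
other_sense = [1.20, -0.60, 0.90]
print(euclidean(same_sense_a, same_sense_b))  # small distance -> same sense, similar vectors
print(euclidean(same_sense_a, other_sense))   # larger distance -> different senses
```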
train_16501
Among them, deep learning-based methods such as CNN (Liu et al., 2017; Kurata et al., 2016), RNN (Liu et al., 2016), combinations of CNN and RNN (Lai et al., 2015; Chen et al., 2017), attention mechanisms (Yang et al., 2016; You et al., 2018), (Adhikari et al., 2019), etc., have achieved great success in document representation.
most of them only focus on document representation but ignore the correlation among labels.
contrasting
train_16502
In dense data, each label has sufficient documents, therefore, self-attention can sufficiently obtain label-specific document representation.
label text is helpful to extract the semantic relations between labels and documents.
contrasting
train_16503
The dominant text classification models in deep learning (Kim, 2014; Zhang et al., 2015a; Yang et al., 2016) require a considerable amount of labeled data to learn a large number of parameters.
such methods may have difficulty in learning the semantic space in the case that only few data are available.
contrasting
train_16504
In text classification, Nguyen (2018) shows that automatic evaluation based on word deletion moderately correlates with human-grounded evaluations that ask crowdworkers to infer machine predictions based on explanations.
explanations that help humans infer machine predictions may not actually help humans make better decisions/predictions.
contrasting
train_16505
Similarly, Serrano and Smith (2019) show that attention is not a fail-safe indicator for explaining machine predictions based on intermediate representation erasure.
Wiegreffe and Pinter (2019) argue that attention can be explanation depending on the definition of explanations (e.g., plausibility and faithfulness).
contrasting
train_16506
If this were true, the choice of model/method would have mattered little for visualizing important features.
a low similarity poses challenges for choosing which model/method to use for displaying important features.
contrasting
train_16507
Consistent with H3b, adjectives are more important in sentiment classification than in deception detection.
to our hypothesis, we found that pronouns do not always play an important role in deception detection.
contrasting
train_16508
CNN wrongly captures key phrases British Energy and the nuclear generator and thus misclassifies the example into World.
our Attention-CNN is able to correctly classify it into Business.
contrasting
train_16509
These benchmarks typically assume large annotated training sets, little mismatch between training and test distributions, relatively clean data, and a lack of adversarial examples (Zue et al., 1990;Marcus et al., 1993;Deng et al., 2009;Lin et al., 2014).
when conditions are not ideal for discriminative classifiers, generative classifiers can actually perform better.
contrasting
train_16510
Recent work in neural networks has shown that introducing latent variables leads to higher representational capacity (Kingma and Welling, 2014;Chung et al., 2015;Burda et al., 2016;Ji et al., 2016).
unlike variational autoencoders (Kingma and Ba, 2015) and related work that use continuous latent variables, our model is more similar to recent efforts that combine neural architectures with discrete latent variables and end-to-end training (Ji et al., 2016;Kim et al., 2017;Kong et al., 2017;Chen and Gimpel, 2018;Wiseman et al., 2018, inter alia).
contrasting
train_16511
The hidden topic model extracts topics purely relying on the reviewer profile, so the topic vectors can be regarded as a summary of the reviewers' research interest.
the common topic vectors are selected based on the knowledge of both the submission and the reviewer's profile, which are expected to capture the topical overlap between the two.
contrasting
train_16512
(Haxby et al., 2001; Kriegeskorte et al., 2006).
the full power of linear decoding with fMRI remains unknown within language neuroscience and elsewhere.
contrasting
train_16513
While these brain mapping studies have detected particular summary features of syntactic computation in the brain, these summary features do not constitute complete proposals of syntactic processing.
each of the models trained in this paper constitutes an independent candidate algorithmic description of sentence representation.
contrasting
train_16514
Some of the discovered inconsistencies, despite being factually incorrect, could be rationalized by humans.
in many cases, the errors were substantial and could have severe repercussions if presented as-is to target readers.
contrasting
train_16515
Lead-3 is a strong baseline that exploits the described layout bias.
there is still a large gap between its performance and an upper bound for extractive models (extractive oracle).
contrasting
train_16516
When focusing on the neural pipeline approaches, we found that in all the steps up to Text Structuring, the recurrent networks retained more information than the Transformer.
68% of the Transformer's text trials contained all the input triples, against 67% of the GRU's trials.
contrasting
train_16517
As shown in Figure 1, BERTScore (precision/recall) can be intuitively viewed as hard alignments (one-to-one) for words in a sentence pair, where each word in one sequence travels to the most semantically similar word in the other sequence.
MoverScore goes beyond BERTScore as it relies on soft alignments (many-to-one) and allows mapping semantically related words in one sequence to the respective word in the other sequence by solving a constrained optimization problem: finding the minimum effort to transform between two texts.
contrasting
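A minimal sketch (random vectors standing in for contextualized word embeddings, not the metrics' official implementations) contrasting the hard greedy alignment behind BERTScore-style scores with the soft, transport-based alignment behind MoverScore-style scores:

```python
# Hard vs. soft alignment between two sets of word vectors. The "flow" linear program
# mirrors the constrained optimization (optimal transport) that MoverScore-style
# metrics solve; all inputs here are toy stand-ins.
import numpy as np
from scipy.optimize import linprog

def cosine_sim(X, Y):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return X @ Y.T

def hard_alignment_f1(X, Y):
    """BERTScore-style: each word greedily picks its single most similar partner."""
    S = cosine_sim(X, Y)
    recall = S.max(axis=1).mean()      # each word in X -> best word in Y
    precision = S.max(axis=0).mean()   # each word in Y -> best word in X
    return 2 * precision * recall / (precision + recall)

def soft_alignment_cost(X, Y):
    """Word-mover-style: minimum total transport cost under mass constraints."""
    n, m = len(X), len(Y)
    C = 1.0 - cosine_sim(X, Y)                 # cost = cosine distance
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    A_eq = np.zeros((n + m, n * m))            # row sums = a, column sums = b
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun                             # minimum effort to transform one text into the other

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 8)), rng.normal(size=(7, 8))
print(hard_alignment_f1(X, Y), soft_alignment_cost(X, Y))
```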
train_16518
Apparently, strict matches on surface forms seem reasonable for extractive summarization datasets.
we still see that our word mover metrics, i.e., WMD-1+BERT+MNLI+PMeans, perform better than or come close to even the supervised metric S^3_best.
contrasting
train_16519
We speculate that current contextualizers are poor at representing named entities like hotels and place names as well as numbers appearing in system and reference texts.
best correlation is still achieved by our word mover metrics combining contextualized representations.
contrasting
train_16520
The major improvements come from contextualized BERT embeddings rather than word2vec and ELMo, and from fine-tuning BERT on large NLI datasets.
we also observed that soft alignments (MoverScore) marginally outperforms hard alignments (BERTScore).
contrasting
train_16521
Its selector is largely deterministic, with a lowest entropy value among all models.
the selector from SS, VRS and Bo.Up.
contrasting
train_16522
This multilingual model still relies on copying to relate each labeled output sentence to its corresponding input counterpart.
unlike (i), it has the advantage of sharing parameters among languages.
contrasting
train_16523
The frames define a common, prototypical argument structure while at the same time providing new concept-specific information.
to PropBank, which defines enumerative semantic roles, VerbAtlas comes with an explicit, cross-frame set of semantic roles linked to selectional preferences expressed in terms of WordNet synsets, and is the first resource enriched with semantic information about implicit, shadow, and default arguments.
contrasting
train_16524
Its application goes well beyond the annotation of corpora: in fact, it was also adopted for the Abstract Meaning Representation (Banarescu et al., 2013), a semantic language that aims at abstracting away from cross-lingual syntactic idiosyncrasies, and NomBank (Meyers et al., 2004), a resource which provides argument structures for nouns.
PropBank's major drawback is that its roles do not explicitly mark the type of semantic relation with the verb; instead, they just enumerate the arguments (i.e., Arg0, Arg1, etc.).
contrasting
train_16525
Second, the weights in SIF (and uSIF) are calculated from the statistics of vocabularies on a very large corpus (Wikipedia).
the weights in GEM are directly computed from the sentences themselves along with the dataset, independent of prior statistical knowledge of language or vocabularies.
contrasting
train_16526
The image shows him riding a bike, indicating that he was riding a bike when he tweeted thus he was in possession of a bike.
if the picture were a screenshot of his Twitter posting statistics, Arnold most likely would not be in possession of a bike when tweeting, but rather sharing a log of his previous trips with his followers.
contrasting
train_16527
The seen test set consists of dialogues set in the same world (set of locations) as the training set, thus also consists of characters, objects, and personas that can appear in the training data.
the unseen test set is com-Category: Graveyard Description: Two-and-a-half walls of the finest, whitest stone stand here, weathered by the passing of countless seasons.
contrasting
train_16528
The standard approach to tackle this issue has been to make certain design choices explicit, such as to enforce a particular policy with respect to overlapping mentions, or common entities, etc., when labeling an EL dataset or performing evaluation.
the appropriate policy may depend on the particular application, setting, etc.
contrasting
train_16529
The first three should be tagged when the mentions are equal (Michael Jackson), shorter (Jackson) or longer (Michael Joseph Jackson), respectively, than the primary label of their corresponding KB-entity (wiki:Michael Jackson).
alias is used for mentions that vary from the primary label of the KB (King of Pop).
contrasting
train_16530
For a given system result S, gold standard G and its fuzzy version G*, we propose that precision be computed in the traditional way for the crisp version of the gold standard, P = |TP|, with the intuition that false positives proposed by the system (type I error) be weighted equally: if the system proposes an annotation, it should be correct, independently of the type of annotation.
a gold standard annotation not proposed by the system may be due to different design choices; we hence propose to use a fuzzy recall measure with respect to G*, namely R* = (Σ_{a∈S} μ_{G*}(a)) / (Σ_{a∈G} μ_{G*}(a)), thus applying different costs for missing annotations (type II errors) depending on the annotation in question.
contrasting
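A minimal sketch (hypothetical data structures, not the authors' code) of the proposed measures: crisp precision against G, taken here as |TP| over the number of system annotations since the pair leaves the denominator implicit ("the traditional way"), and fuzzy recall R* against G*:

```python
# Annotations are hashable items; the fuzzy gold standard G* is a dict mapping each
# annotation to a membership weight mu.
def crisp_precision(system, gold):
    """Traditional precision: every system annotation must appear in the crisp gold."""
    tp = len(set(system) & set(gold))
    return tp / len(system) if system else 0.0

def fuzzy_recall(system, gold, fuzzy_gold):
    """R* = recovered gold membership mass over total gold membership mass."""
    recovered = sum(fuzzy_gold.get(a, 0.0) for a in system)
    total = sum(fuzzy_gold.get(a, 0.0) for a in gold)
    return recovered / total if total else 0.0

gold = ["m1", "m2", "m3"]
fuzzy_gold = {"m1": 1.0, "m2": 0.5, "m3": 0.2}   # design-choice-dependent annotations get lower weight
system = ["m1", "m3", "m4"]                      # m4 is a false positive
print(crisp_precision(system, gold), fuzzy_recall(system, gold, fuzzy_gold))
```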
train_16531
Most Open IE systems extract binary relations using domain-independent syntactic and lexical constraints.
systems specialized in other syntactic constructions were also developed, such as noun-mediated relations (Pal and Mausam, 2016), n-ary relations (Akbik and Löser, 2012), nested propositions (Bhutani et al., 2016) and numerical Open IE (Saha et al., 2017a).
contrasting
train_16532
One strategy for tuple matching would be to enforce an exact match by matching the boundaries of the extracted and benchmark tuples in text.
as noted in earlier works (Schneider et al., 2017), this method penalizes different but equally valid arguments, which result from different annotation styles employed by different Open IE systems.
contrasting
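A minimal sketch (hypothetical matcher, not any benchmark's official scorer) of the contrast described: exact boundary matching rejects a differently segmented but equally valid argument, while a token-overlap match accepts it:

```python
# Each tuple is (arg1, relation, arg2); boundaries differ but the extraction is valid.
def exact_match(pred, gold):
    return pred == gold

def overlap_match(pred, gold, threshold=0.5):
    def jaccard(a, b):
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b) if a | b else 0.0
    return all(jaccard(p, g) >= threshold for p, g in zip(pred, gold))

gold = ("Barack Obama", "was born in", "Honolulu")
pred = ("Obama", "was born in", "Honolulu")     # same fact, tighter argument boundary
print(exact_match(pred, gold), overlap_match(pred, gold))   # False True
```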
train_16533
As each of the unsupervised Open IE systems has its own rules to extract different relations, applying only one system might miss other potential relations that can be extracted by other Open IE systems.
since SenseOIE learns from multiple existing Open IE systems, it can extract many different relation types.
contrasting
train_16534
This feature is useful to generate multiple sequences of extractions from a single sentence.
note that this model does not use the results of unsupervised systems as features.
contrasting
train_16535
Crosslingual ED aims to tackle this challenge by transferring knowledge between different languages to boost performance.
previous cross-lingual methods for ED demonstrated a heavy dependency on parallel resources, which might limit their applicability.
contrasting
train_16536
The original GCNs compute a graph convolution vector for w_i at the (k+1)-th layer by h^{k+1}_{w_i} = g( Σ_{v∈N(w_i)} ( W^k_{L(w_i,v)} h^k_v + b^k_{L(w_i,v)} ) ), where W^k_{L(w_i,v)} and b^k_{L(w_i,v)} are parameters of the dependency label L(w_i, v) in the k-th layer.
retaining parameters for every dependency label is space-consuming and compute-intensive (there are approximately 50 labels); in our model, we limit L(w_i, v) to have only three types of labels: 1) an original edge, 2) a self-loop edge, and 3) an added inverse edge, as suggested in (Nguyen and Grishman, 2018).
contrasting
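A minimal sketch (assumed tensor shapes and toy inputs, not the paper's implementation) of a dependency-GCN layer that keeps one weight matrix per edge type (original, self-loop, inverse) rather than one per dependency label:

```python
import torch
import torch.nn as nn

class ThreeTypeGCNLayer(nn.Module):
    EDGE_TYPES = 3  # 0: original dependency edge, 1: self-loop, 2: added inverse edge

    def __init__(self, dim):
        super().__init__()
        self.W = nn.ModuleList(nn.Linear(dim, dim) for _ in range(self.EDGE_TYPES))

    def forward(self, h, edges):
        """h: (n, dim) token states; edges: iterable of (head, dependent, edge_type)."""
        out = torch.zeros_like(h)
        degree = torch.zeros(h.size(0), 1)
        for head, dep, etype in edges:
            out[dep] += self.W[etype](h[head])   # one parameter set per edge type, not per label
            degree[dep] += 1.0
        return torch.relu(out / degree.clamp(min=1.0))

# Toy usage: 4 tokens, two dependency arcs plus their inverses and self-loops.
h = torch.randn(4, 16)
edges = [(0, 1, 0), (1, 2, 0), (1, 0, 2), (2, 1, 2)] + [(i, i, 1) for i in range(4)]
print(ThreeTypeGCNLayer(16)(h, edges).shape)     # torch.Size([4, 16])
```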
train_16537
We discuss five baseline approaches, where the best approach achieves an F1 score of 0.50, significantly outperforming a traditional approach by 79% (0.28).
our best baseline is far from reaching human performance (0.82), indicating our dataset is challenging.
contrasting
train_16538
End-to-end systems Miwa and Bansal, 2016) are a promising solution for addressing error propagation.
a major roadblock for the advancement of this line of research is the lack of benchmark datasets.
contrasting
train_16539
Our best baseline (Baseline 5) significantly outperforms the standard pipeline approach (Baseline 1) in both the text and link evaluation.
the performance of Baseline 5 is well below the human performance.
contrasting
train_16540
Neural Process Networks The most significant prior work on this dataset is the work of .
their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the highquality labeled data, instead treating it as dev and test data.
contrasting
train_16541
Pre-training has proven to be effective in unsupervised machine translation due to its ability to model deep context information in cross-lingual scenarios.
the crosslingual information obtained from shared BPE spaces is inexplicit and limited.
contrasting
train_16542
Previous approaches benefit mostly from crosslingual n-gram embeddings, but recent work proves that cross-lingual language model pretraining could be a more effective way to build initial unsupervised machine translation models (Lample and Conneau, 2019).
in their method, the cross-lingual information is mostly obtained from shared Byte Piece Encoding (BPE) (Sennrich et al., 2016b) spaces during pre-training, which is inexplicit and limited.
contrasting
train_16543
In this way, the model can leverage the cross-lingual information provided by parallel sentences to predict the masked tokens.
for unsupervised machine translation, TLM cannot be used due to the lack of parallel sentences.
contrasting
train_16544
Leveraging much more monolingual data from Wikipedia, their work shows a big potential of pre-training for unsupervised machine translation.
the cross-lingual information is obtained mostly from the shared BPE space during their pre-training method, which is inexplicit and limited.
contrasting
train_16545
Therefore, there exists no analytical solution to maximize it.
since deep neural networks are differentiable, we can update θ by taking a gradient ascent step. The resulting algorithm belongs to the class of generalized EM algorithms and is guaranteed (for a sufficiently small learning rate η) to converge to a (local) optimum of the data log-likelihood (Wu, 1983).
contrasting
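A minimal sketch (toy two-component Gaussian mixture, not the paper's model) of a generalized-EM update of this kind: the E-step posteriors are held fixed and θ is moved by a single gradient ascent step because no closed-form M-step is assumed:

```python
import torch

def e_step(x, theta):
    # Posterior over a binary latent z for a toy two-component, unit-variance mixture.
    logp = -0.5 * (x.unsqueeze(1) - theta) ** 2        # (n, 2) unnormalized component log-densities
    return torch.softmax(logp, dim=1).detach()         # q(z|x), held fixed during the M-step

def generalized_m_step(x, theta, q, lr=0.5):
    # Single gradient ascent step on the expected complete-data log-likelihood.
    theta = theta.clone().requires_grad_(True)
    expected_ll = (q * (-0.5 * (x.unsqueeze(1) - theta) ** 2)).sum() / x.numel()
    expected_ll.backward()
    with torch.no_grad():
        return theta + lr * theta.grad

torch.manual_seed(0)
z = (torch.rand(200) < 0.5)
x = torch.randn(200) + torch.where(z, torch.tensor(-2.0), torch.tensor(2.0))
theta = torch.tensor([-0.5, 0.5])
for _ in range(100):
    theta = generalized_m_step(x, theta, e_step(x, theta))
print(theta)   # after many small M-steps the components sit near the data modes (about -2 and +2)
```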
train_16546
LaSyn is able to generate diverse translations that reflect the sentence structure implied by the input POS tags.
in trying to fit the translation into the specified sequence, it deviates somewhat from the ideal translation.
contrasting
train_16547
We find that the translation performance of SEARCH rises with the increase of monolingual corpus size in the beginning.
further enlarging the monolingual corpus hurts the translation performance.
contrasting
train_16548
However, further enlarging the monolingual corpus hurts the translation performance.
our approach can still obtain further improvements when adding more synthetic bilingual sentence pairs.
contrasting
train_16549
ReLU or similar non-linearities work well with single neurons.
we find that this squashing function works best with capsules.
contrasting
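The pair does not spell the squashing function out; as a stand-in, the sketch below uses the standard capsule squashing non-linearity of Sabour et al. (2017) and contrasts it with element-wise ReLU:

```python
# squash(s) = (||s||^2 / (1 + ||s||^2)) * s / ||s||: short capsule vectors shrink toward 0,
# long ones approach unit length, and direction is preserved.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

s = np.array([[0.1, 0.2], [3.0, 4.0]])   # one short and one long capsule vector
print(squash(s))            # resulting norms are roughly 0.05 and 0.96
print(np.maximum(s, 0))     # ReLU acts element-wise and ignores the vector as a whole
```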
train_16550
Recently many embedding-based approaches were proposed for cross-lingual entity alignment.
almost all of them are based on TransE or its variants, which have been demonstrated by many studies to be unsuitable for encoding multi-mapping relations such as 1-N, N-1 and N-N relations; thus, these methods obtain low alignment precision.
contrasting
train_16551
Multilingual KGs such as BabelNet (Navigli and Ponzetto, 2012), YAGO3 (Mahdisoltani et al., 2013) and DBpedia (Lehmann et al., 2015) play essential roles in many cross-lingual applications.
for multilingual KGs, each language-specific part is constructed by different parties with different data sources.
contrasting
train_16552
Such an obviously false triple is "too easy" and can easily obtain low plausibility during training.
a negative triple like (David Hilbert, nationality, France) is more valuable, since France has similar semantics to Germany, but France can't replace Germany.
contrasting
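A minimal sketch (invented embeddings and entity names, not the paper's method) of the kind of "valuable" negative sampling described: corrupt a triple with an entity whose embedding is close to the original, so the negative is hard rather than obviously false:

```python
import numpy as np

def hard_negative(triple, entity_emb, entity_names, rng, k=2):
    """Replace the tail with one of its k nearest neighbours in embedding space."""
    h, r, t = triple
    t_idx = entity_names.index(t)
    dists = np.linalg.norm(entity_emb - entity_emb[t_idx], axis=1)
    dists[t_idx] = np.inf                       # never pick the entity itself
    candidates = np.argsort(dists)[:k]          # nearest neighbours = similar semantics
    return (h, r, entity_names[rng.choice(candidates)])

entity_names = ["Germany", "France", "Italy", "Banana"]
entity_emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2], [-5.0, 7.0]])
rng = np.random.default_rng(0)
print(hard_negative(("David_Hilbert", "nationality", "Germany"), entity_emb, entity_names, rng))
# -> e.g. ("David_Hilbert", "nationality", "France"), never the uninformative "Banana"
```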
train_16553
But when neg is small, the uniform sampling method can't generate enough valuable samples, which seriously hurts the final performance.
our sampling strategy can pay more attention to valuable samples, generating many valuable negative samples even when the amount of total samples is small.
contrasting
train_16554
The framework presented in this work is a general framework that can accommodate other mappings and improvements to the distance functions.
so far, to the best of our knowledge, structure-preserving mappings such as GP outperform non-linear mappings that change the structure of the underlying space.
contrasting
train_16555
The underlying assumption of these approaches is that in-domain and out-of-domain NMT models share the same parameter space or prior distributions, and the useful out-of-domain translation knowledge can be completely transferred to the in-domain NMT model in a one-pass manner.
it is difficult to achieve this goal due to domain differences.
contrasting
train_16556
More specifically, at the k-th iteration, we first transfer the translation knowledge of the previous in-domain NMT model θ_out trained on D_out (Line 6), and then reversely transfer the translation knowledge encoded by θ. Obviously, during the above procedure, one of the important steps is how to transfer the translation knowledge from one domain-specific NMT model to the other one.
if we directly employ conventional domain transfer approaches, such as fine-tuning, as the iterative dual domain adaptation proceeds, the previously learned translation knowledge tends to be ignored.
contrasting
train_16557
The underlying reason is that these multi-domain models discriminate domain-specific and domain-shared information in the encoder; however, their shared decoder is inadequate to effectively preserve domain-related text style and idioms.
our framework is adept at preserving this information since we construct an individual NMT model for each domain.
contrasting
train_16558
Extending the study to the multi-agent scenario is feasible and desirable, as multiple agents might supply more reliable and diverse advantages compared to the two-agent scenario.
the agents are expected to learn advantages from each other in the multi-agent scenario, which results in a complex many-to-many learning problem (Figure 1(b)).
contrasting
train_16559
With the above formula, the agent is optimized to minimize the divergence between its own model and the ensemble model.
integrating the above regularization term into the training objective straightforwardly is problematic in practice.
contrasting
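A minimal sketch (toy logits, not the paper's objective) of a divergence regularizer of this kind: a KL term tying one agent's output distribution to the ensemble average over all agents, which could be added to the usual training loss:

```python
import torch
import torch.nn.functional as F

def ensemble_kl(agent_logits, all_logits):
    """agent_logits: (batch, vocab); all_logits: list of (batch, vocab), one per agent."""
    ensemble = torch.stack([F.softmax(l, dim=-1) for l in all_logits]).mean(dim=0)
    return F.kl_div(F.log_softmax(agent_logits, dim=-1), ensemble, reduction="batchmean")

logits = [torch.randn(4, 10) for _ in range(3)]   # three agents, a batch of 4, vocab of 10
print(ensemble_kl(logits[0], logits))             # scalar regularization term for agent 0
```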
train_16560
Plain pivot-based transfer outperforms the synthetic data baseline by up to +1.9% BLEU or -3.3% TER.
the pivot adapter or cross-lingual encoder gives marginal or inconsistent improvements over the plain transfer.
contrasting
train_16561
Modern sentence-level NMT systems often produce plausible translations of isolated sentences.
when put in context, these translations may end up being inconsistent with each other.
contrasting
train_16562
Current state-of-the-art neural machine translation (NMT) uses a deep multi-head selfattention network with no explicit phrase information.
prior work on statistical machine translation has shown that extending the basic translation unit from words to phrases has produced substantial improvements, suggesting the possibility of improving NMT performance from explicit modeling of phrases.
contrasting
train_16563
Existing methods mainly focus on developing novel network architectures so as to stabilize gradient back-propagation, such as the fast-forward connection (Zhou et al., 2016), the linear associative unit (Wang et al., 2017), or gated recurrent network variants (Hochreiter and Schmidhuber, 1997;Gers and Schmidhuber, 2001;Cho et al., 2014;Di Gangi and Federico, 2018).
to the above recurrent network based NMT models, recent work focuses on feed-forward alternatives with more smooth gradient flow, such as convolutional networks (Gehring et al., 2017) and selfattention networks (Vaswani et al., 2017).
contrasting
train_16564
Neural-network-based models for Machine Translation (MT) have set new standards for performance, especially when large amounts of parallel text (bitext) are available.
explicit word-to-word alignments, which were foundational to pre-neural statistical MT (SMT) (Brown et al., 1993), have largely been lost in neural MT (NMT) models.
contrasting
train_16565
In §4 we established that our discriminatively-trained neural aligner outperforms unsupervised baselines, especially on NER spans; in §5 we verified that the alignments it produces can be productively applied to a downstream IE task (NER) via dataset projection.
unlike these unsupervised baselines, our aligner requires labelled data on the order of thousands of sentences, and thus cannot be applied to language pairs for which no labelled alignment data exists (most languages).
contrasting
train_16566
These studies mainly use capsule network for information aggregation, where the capsules could have a less interpretable meaning.
our model learns what we expect by the aid of auxiliary learning signals, which endows our model with better interpretability.
contrasting
train_16567
Popović (2011b) further develops this algorithm into an open-source tool and demonstrates that the detected errors help to build better evaluation metrics.
the edit distance algorithm is not robust (c. f. Sec.
contrasting
train_16568
It is because a wrong translation is also likely to get aligned with the source, as it can be the translation of the source words in some other contexts.
reference gives us a pointer to the correct word choice in the given context, hence helping to determine the W label.
contrasting
train_16569
Neural machine translation (NMT) has achieved the state-of-the-art results on a mass of language pairs with varying structural differences, such as English-French (Bahdanau et al., 2014;Vaswani et al., 2017) and Chinese-English (Hassan et al., 2018).
so far not much is known about how and why NMT works, which poses great challenges for debugging NMT models and designing optimal architectures.
contrasting
train_16570
Punctuation in NMT is understudied since it carries little information and often does not affect the understanding of a sentence.
we find that punctuation is important on English⇒Japanese translation, whose proportion increases dramatically.
contrasting
train_16571
In particular, we observe that 2;IMP;PL;V, the category difficult for Polish-Czech BLI, is also among the most challenging for Polish-Spanish.
one of the highest performing categories for Polish-Czech, 3;MASC;PL;PST;V, yields much worse accuracy for Polish-Spanish.
contrasting
train_16572
Without the warm-up period and the additional copying tasks, our model's dev accuracy is worse than any of the baselines.
our two-step attention trained with the additional copying task already improves over most baselines without any of the additional biases, with a dev accuracy of 48.
contrasting
train_16573
Previous work aims to build a well-formed tree (Tiedemann and Agić, 2016) from source dependencies, solving word alignment conflicts by heuristic rules.
we use partial translation instead to avoid unnecessary noise.
contrasting
train_16574
Recently, transition-based top-down parsing with Pointer Networks (Vinyals et al., 2015) has attained state-of-the-art results in both dependency and discourse parsing tasks with the same computational efficiency (Ma et al., 2018;Lin et al., 2019); thanks to the encoder-decoder architecture that makes it possible to capture information from the whole text and the previously derived subtrees, while limiting the number of parsing steps to linear.
the decoder of these parsers has a sequential structure, which may not yield the most appropriate inductive bias for deriving a hierarchical structure.
contrasting
train_16575
Thus it contains information about its children.
in our model we consider the decoder state when the sibling was first generated from its parent.
contrasting
train_16576
Recent sequence labeling models achieve state-of-the-art performance by combining both character-level and word-level information (Chiu and Nichols, 2016;Ma and Hovy, 2016;Lample et al., 2016).
these models heavily rely on large-scale annotated training data, which may not be available in most languages.
contrasting
train_16577
It is worth mentioning that, in previous work and this work, the corpora used in the experiments are limited to the source and the target language.
the multilingual BERT is jointly learned on Wikipedia of 102 languages and may benefit from a multi-hop transfer.
contrasting
train_16578
Wikipedia contains multilingual articles for various topics and can thus be used to generate parallel/comparable corpora or even weakly annotated target language sentences (Kim et al., 2012).
parallel corpora and Wikipedia can be rare for true low-resource languages.
contrasting
train_16579
Some of the previous work also proposes sequence labeling models with shared parameters between languages for performing cross-lingual knowledge transfer (Lin et al., 2018;Cotterell and Duh, 2017;Yang et al., 2017;Ammar et al., 2016;Kim et al., 2017).
these models are usually obtained through joint learning and require annotated data from the target language.
contrasting
train_16580
Recurrent neural networks (RNN) used for Chinese named entity recognition (NER) that sequentially track character and word information have achieved great success.
the characteristic of chain structure and the lack of global semantics determine that RNN-based models are vulnerable to word ambiguities.
contrasting
train_16581
and then apply word sequence labeling (Yang et al., 2016;He and Sun, 2017).
the rare gold-standard segmentation in NER datasets and incorrectly segmented entity boundaries both negatively impact the identification of named entities (Peng and Dredze, 2015;He and Sun, 2016).
contrasting
train_16582
In particular, Zhang and Yang (2018) introduced a variant of a long short-term memory network (latticestructured LSTM) that encodes all potential words matching a sentence to exploit explicit word information, achieving state-of-the-art results.
these methods are usually based on RNN or CRF to sequentially encode a sentence, while the underlying structure of language is not strictly sequential (Shen et al., 2019).
contrasting
train_16583
Especially, Zhang and Yang (2018) proposed a lattice LSTM to model characters and potential words simultaneously.
their lattice LSTM used a concatenation of independently trained left-to-right and right-to-left LSTMs to represent features, which was also limited (Devlin et al., 2018).
contrasting
train_16584
The F1 score of LGN decreases by 3.59% on average on the four datasets without CRF.
the lattice LSTM decreases by 6.24%.
contrasting
train_16585
However, the lattice accuracy decreases significantly as the sentence length increases.
the LGN not only gives higher results over short sentences, but also shows its effectiveness and robustness when the sentence length is more than 80 characters.
contrasting
train_16586
graph composition states are updated for only one step.
it gives an incorrect class of the entity "印度河(The Indus River)", which is a location entity but not a GPE (Geo-Political Entity).
contrasting
train_16587
Moreover, the capsule network with dynamic routing algorithms (Zhang et al., 2018a) is proposed to perform interactions in both directions.
there are still two limitations in this model.
contrasting
train_16588
Considering the requirement of the annotated parse trees and the costly annotation effort, most prior work relied on the supervised syntactic parser.
a supervised parser may be unavailable when the language is low-resourced or the target data has different distribution from the source domain.
contrasting
train_16589
By top-down greedy parsing (Shen et al., 2018a), which recursively splits the sentence into two constituents with minimum a, a parse tree can be formed.
because each layer has a set of a_l, we have to decide which layer to use for parsing.
contrasting
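A minimal sketch (toy scores, not the paper's model) of the top-down greedy parsing procedure referenced above: recursively split each span at the position with the minimum score a, yielding a binary constituency tree:

```python
def greedy_parse(tokens, a):
    """tokens: list of words; a: list of len(tokens)-1 split scores between adjacent words."""
    if len(tokens) <= 1:
        return tokens[0] if tokens else None
    split = min(range(len(a)), key=a.__getitem__)           # position with minimum a
    left = greedy_parse(tokens[:split + 1], a[:split])
    right = greedy_parse(tokens[split + 1:], a[split + 1:])
    return (left, right)

tokens = ["the", "cat", "sat", "down"]
a = [0.9, 0.1, 0.7]        # lowest score between "cat" and "sat" -> top-level split there
print(greedy_parse(tokens, a))   # (('the', 'cat'), ('sat', 'down'))
```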
train_16590
It is worth mentioning that we have tried to initialize our Transformer model with pre-trained BERT and then fine-tune it on WSJ-train.
in this setting, even when the training loss becomes lower than the loss of training from scratch, the parsing result is still far from our best results.
contrasting
train_16591
The basic factorized model got it wrong, assigning A1 to the argument 'state'.
taking into account other arguments, the model can correct the label.
contrasting
train_16592
Probabilistic methods for aggregating crowdsourced data have been shown to be more accurate than simple heuristics such as majority voting (Raykar et al., 2010;Sheshadri and Lease, 2013;Rodrigues et al., 2013;Hovy et al., 2013).
existing methods for aggregating sequence labels cannot model dependencies between the annotators' labels (Rodrigues et al., 2014;Nguyen et al., 2017) and hence do not account for their effect on annotator noise and bias.
contrasting
train_16593
Like spam, CM can model spammers who frequently chose one label regardless of the ground truth, but also models different error rates and biases for each class.
CM ignores dependencies between annotations in a sequence, such as the fact that an 'I' cannot immediately follow an 'O'.
contrasting
train_16594
The choice of annotator model for a particular annotator depends on the developer's understanding of the annotation task: if the annotations have sequential dependencies, this suggests the seq model; for non-sequential classifications CM may be effective with small (≤ 5) numbers of classes; spam may be more suitable if there are many classes, as the number of parameters to learn is low.
there is also a trade-off between the expressiveness of the model and the number of parameters that must be learned.
contrasting
train_16595
This is a strong assumption when considering that the annotators have to make their decisions based on the same input data.
in practice, dependencies do not usually cause the most probable label to change (Zhang, 2004), hence the performance of classifier combination methods is only slightly degraded, while avoiding the complexity of modelling dependencies between annotators (Kim and Ghahramani, 2012).
contrasting
train_16596
In this way, an efficient inference algorithm such as the maximum directed spanning tree algorithm (Chu and Liu, 1965) can be used.
with the corpus-wise constraints, directly solving Eq.
contrasting
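A minimal sketch (toy arc scores) of the inference step mentioned above: the highest-scoring dependency tree is a maximum directed spanning tree (arborescence), computed here with networkx's Edmonds implementation rather than a hand-rolled Chu-Liu/Edmonds:

```python
import networkx as nx

scores = {            # (head, dependent) -> arc score; node 0 is the artificial ROOT
    (0, 1): 5.0, (0, 2): 1.0, (0, 3): 1.0,
    (1, 2): 4.0, (1, 3): 2.0,
    (2, 3): 3.0, (3, 2): 2.5,
}
G = nx.DiGraph()
for (h, d), s in scores.items():
    G.add_edge(h, d, weight=s)

tree = nx.maximum_spanning_arborescence(G)        # Edmonds' algorithm over arc weights
print(sorted(tree.edges()))                       # e.g. [(0, 1), (1, 2), (2, 3)]
```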
train_16597
Computing devices have recently become capable of interacting with their end users via natural language.
they can only operate within a limited "supported" domain of discourse and fail drastically when faced with an out-of-domain utterance, mainly due to the limitations of their semantic parser.
contrasting
train_16598
Current commercial conversational agents such as Siri, Alexa or Google Assistant come with a fixed set of simple functions like setting alarms and making reminders, but are often not able to cater to the specific phrasing of a user or the specific action a user needs.
it has recently been shown that it is possible to add new functionalities to an agent through natural language instruction (Azaria et al., 2016;Labutov et al., 2018).
contrasting
train_16599
On one hand, this suggests that the discriminator has room for improvement.
the numbers suggest that there is still a much larger room for improvement for the encoder, aligner, and look-up components.
contrasting