Dataset schema:
- id: string (length 7-12)
- sentence1: string (length 6-1.27k)
- sentence2: string (length 6-926)
- label: string (4 classes)
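A minimal loading sketch with the Hugging Face datasets library is shown below. The dataset identifier used here is a hypothetical placeholder (the actual repository path is not given in this dump), so replace it before running; the field names follow the schema above.

# Sketch: load the train split and inspect one example.
# "org/contrasting-pairs" is a hypothetical placeholder identifier, not the real dataset path.
from datasets import load_dataset

dataset = load_dataset("org/contrasting-pairs", split="train")

example = dataset[0]
print(example["id"])         # e.g., "train_15700"
print(example["sentence1"])  # first sentence of the pair
print(example["sentence2"])  # second sentence of the pair
print(example["label"])      # one of 4 classes, e.g., "contrasting"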
train_15700
Since both variables are used to reconstruct the word, but only the y variable is trained to predict the tag, it appears that z is capturing other information useful for reconstructing the word.
since they are both used for reconstruction, the two spaces show signs of alignment; that is, the "tag" latent variable y does not show as clean a separation into tag clusters as the y variable in the VSL-GG-Hier model in Figure 3e.
contrasting
train_15701
These methods focused on mono-lingual settings.
for cross-lingual tasks (e.g., cross-lingual entity linking), these approaches need to introduce additional tools to do translation, which suffers from extra costs and inevitable errors (Ji et al., 2016).
contrasting
train_15702
Thus, monolingual structured knowledge of entities is not only extended to cross-lingual settings, but also augmented from other languages.
we utilize distant supervision to generate comparable sentences for cross-lingual sentence regularizer to model co-occurrence information across languages.
contrasting
train_15703
Vulić and Moens (2015, 2016) collect comparable documents on the same themes from multi-lingual Wikipedia, then shuffle and merge them to build "pseudo bilingual documents" as training corpora.
the quality of "pseudo bilingual documents" is difficult to control, resulting in poor performance in several cross-lingual tasks (Vulic and Moens, 2016).
contrasting
train_15704
On one hand, we utilize their shared semantics to align similar words and entities with similar embedding vectors, no matter whether they are in the same language or not.
crosslingual embeddings will benefit from different languages due to the complementary knowledge.
contrasting
train_15705
Conventional knowledge representation methods normally regard cross-lingual links as a special equivalence type of relation between two entities (Zhu et al., 2017).
we argue that this may lead to an inconsistent training objective.
contrasting
train_15706
Another approach in (Han et al., 2016;Toutanova et al., 2015;Wu et al., 2016) learns to represent entities based on their textual descriptions together with the structured relations.
these methods only focus on mono-lingual settings, and little research has been done in cross-lingual scenarios.
contrasting
train_15707
Conventional knowledge representation methods normally regard cross-lingual links as a special equivalence type of relation between two entities (Zhu et al., 2017).
we argue that this may lead to an inconsistent training objective: there will be no direct relation between the Chinese entity 福斯特 (Foust) and the English entity Piston by merely adding the equivalence relation between Foust and 福斯特, which contradicts the fact that Foust belongs to Piston, no matter in which language.
contrasting
train_15708
In this paper we consider both options, taking advantage of the second in order to feed the model with BEs.
to standard LSTM-based language models. [Figure 1: The AE-SCL and AE-SCL-SR models (figure imported from ZR17).]
contrasting
train_15709
We focus on learning linear mappings to construct the common semantic space and adopt correlational neural networks (CorrNet) (Chandar et al., 2016;Rajendran et al., 2015) as the basic model.
to previous work which only exploited monolingual word semantics, we introduce multiple cluster-level alignments and design a new cluster consistent CorrNet to align both words and clusters.
contrasting
train_15710
Furthermore, recent work proposes approaches to obtain unsupervised BWEs without relying on any bilingual resources (Zhang et al., 2017;Lample et al., 2018b).
to BWEs that only focus on a pair of languages, MWEs instead strive to leverage the interdependencies among multiple languages to learn a multilingual embedding space.
contrasting
train_15711
In the crosslingual setting, it has been successfully applied to unsupervised cross-lingual text classification (Chen et al., 2016) and unsupervised bilingual word embedding learning (Zhang et al., 2017;Lample et al., 2018b).
these methods only consider one pair of languages at a time, and do not fully exploit the cross-lingual relations in the multilingual setting.
contrasting
train_15712
In CLWS though, one can still achieve relatively high correlation in spite of minor inaccuracies.
an encouraging result is that when compared to the state-of-the-art supervised results, our MAT+MPSR method outperforms NASARI by a very large margin, and achieves top-notch overall performance similar to the competition winner, Luminoso, without using any bitexts.
contrasting
train_15713
(2017) leveraged cross-lingual signals in more than two languages.
they either used pretrained embeddings or learned only for the English side, which is undesirable since cross-lingual embeddings should be jointly learned such that they align well in the embedding space.
contrasting
train_15714
(2012) presented SCWS, the first and only dataset that contains word pairs and their sentential contexts for measuring the quality of sense embeddings.
it is a monolingual dataset constructed in English, so it cannot evaluate cross-lingual semantic word similarity.
contrasting
train_15715
This procedure, called post-specialization, effectively propagates the information stored in the external constraints to the entire word vector space.
this mapping should not just model the inherent transformation, but also ensure that the resulting vector is 'natural'.
contrasting
train_15716
We observe only modest and inconsistent gains over ATTRACT-REPEL and POST-DFFN in the FULL setting.
the explanation of this finding is straightforward: 99.2% of SimLex words and 99.9% of SimVerb words are present in the external constraints, making this an unrealistic evaluation scenario.
contrasting
train_15717
Because of this, monolingual embedding spaces are not isomorphic (Kementchedjhieva et al., 2018).
simply dropping the orthogonality constraints leads to overfitting, and is thus not effective in practice.
contrasting
train_15718
Some common signals to learn bilingual embeddings come from parallel (Hermann and Blunsom, 2014;Luong et al., 2015;Levy et al., 2017) or comparable corpora (Vulić and Moens, 2015a;Søgaard et al., 2015;Vulić and Moens, 2016), or lexical resources such as WordNet, ConceptNet or BabelNet (Speer et al., 2017;Mrksic et al., 2017;Goikoetxea et al., 2018).
these sources of supervision may be scarce, limited to certain domains or may not be directly available for certain language pairs.
contrasting
train_15719
Furthermore, one may wonder whether the initial alignment is actually needed, since e.g., Coates and Bollegala (2018) obtained high-quality meta-embeddings without such an alignment set.
when applying our approach directly to the initial monolingual non-aligned embedding spaces, we obtained results which were competitive but slightly below the two considered alignment strategies.
contrasting
train_15720
In Italian our proposed model shows an improvement across all configurations.
in Spanish VecMap emerges as a highly competitive baseline, with our model only showing an improved performance when training data in this language abounds (in this specific case there is an increase from 17.2 to 19.5 points in the MRR metric).
contrasting
train_15721
Answering this directly would require annotators with domain expertise and is infeasible in practice.
we can use our crowdsourced annotation to answer a restricted variant of this question: given a sentence s and an insertable phrase p, do humans agree on where p belongs in s?
contrasting
train_15722
A key challenge in cross-lingual NLP is developing general language-independent architectures that are equally applicable to any language.
this ambition is largely hampered by the variation in structural and semantic properties, i.e.
contrasting
train_15723
Deep learning has allowed NLP algorithms to dispose of manually-crafted features, and to virtually achieve language independence.
their performance still varies noticeably across languages due to different underlying data distributions (Bender, 2013;O'Horan et al., 2016).
contrasting
train_15724
The rationale behind fixing the set V is a) to make the language model more robust to handling OOVs and to effectively bypass the problem of unreliable word estimates for low-frequency and unseen words (by ignoring them), and b) to enable direct comparisons of absolute perplexity scores across different models.
this poses a critical challenge, as cross-linguistic evaluation becomes uneven.
contrasting
train_15725
This setup is commonly referred to as the open-vocabulary setup.
two distinct approaches with crucial modeling differences are referred to by the same term in the literature.
contrasting
train_15726
Unseen test words are mapped to one <UNK> vector, sampled from the space of trained word vectors relying on a normal distribution and the same fixed random seed for all models.
kN5 by design has a slightly different way of handling unseen test words: they are regarded as outliers and assigned low-probability estimates.
contrasting
train_15727
We have also observed that injecting character information into word representations is always beneficial because this mitigates the above-mentioned sparsity issues.
the extent of the gain in perplexity partly depends on some typological properties that regulate the ambiguity of the mapping between morphemes (here modeled as character n-grams) and their meaning.
contrasting
train_15728
Others have focused on token-level language ID; some work is constrained to predicting word-level labels from a single language pair (Nguyen and Dogruöz, 2013;Solorio et al., 2014;Molina et al., 2016a;Sristy et al., 2017), while others permit a handful of languages (Das and Gambäck, 2014;Sristy et al., 2017;Rijhwani et al., 2017).
cMX supports 100 languages.
contrasting
train_15729
Models like Skip-gram, CBOW (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014) are immensely popular and achieve remarkable performance in many NLP tasks.
most WRL methods learn distributional information of words from large corpora while the valuable information contained in semantic lexicons is disregarded.
contrasting
train_15730
Another way of performing language independent transfer resorts to multi-task learning, where a model is trained jointly across different languages by sharing parameters to allow for knowledge transfer (Ammar et al., 2016a;Cotterell and Duh, 2017;Lin et al., 2018).
such approaches usually require some amounts of training data in the target language for bootstrapping, which is different from our unsupervised approach that requires no labeled resources in the target language.
contrasting
train_15731
Beam search is a widely used approximate search strategy for neural network decoders, and it generally outperforms simple greedy decoding on tasks like machine translation.
this improvement comes at substantial computational cost.
contrasting
train_15732
These decoders generate tokens from left to right, at each step giving a distribution over possible next tokens, conditioned on both the input and all the tokens generated so far.
since the space of all possible output sequences is infinite and grows exponentially with sequence length, heuristic search methods such as greedy decoding or beam search (Graves, 2012;Boulanger-Lewandowski et al., 2013) must be used at decoding time to select high-probability output sequences.
contrasting
train_15733
Neural machine translation (NMT) based on the encoder-decoder architecture becomes the new state-of-the-art due to distributed representation and end-to-end learning (Cho et al., 2014;Bahdanau et al., 2015;Junczys-Dowmunt et al., 2016;Gehring et al., 2017;Vaswani et al., 2017).
the current NMT is a global model that maximizes performance on the overall data and has problems in handling low-frequency words and ambiguous words; we refer to these words as troublesome words and define them in Section 3.1.
contrasting
train_15734
Therefore, we have to leverage external bilingual data sources to alleviate the problem.
the external bilingual data are usually not in the same domain as the utterance, and hence they are not aligned with the slot-value pair and system acts (i.e., (c_s^e, c_v^e, a_t^e) or (c_s^f, c_v^f, a_t^f)).
contrasting
train_15735
From Table 3 (α ablation with τ fixed to 0.1; τ ablation with α fixed to 1), we can observe that the experimental results are not very sensitive to α: a dramatic change of α will not harm the final results too much, so we simply choose α = 1 as the hyper-parameter.
the system is more sensitive to temperature.
contrasting
train_15736
The language embeddings need to represent all of the languagespecific information and thus may need to be large in size.
when computing the parameters of each group, only a small part of that information is relevant.
contrasting
train_15737
Low-Resource: Similar to the supervised experiments except that we limit the size of the parallel corpora used in training.
for GML and CPG the full monolingual corpus is used for auto-encoding training.
contrasting
train_15738
This means that, in the previous example, we would first translate from German to English and then from English to French (using two pairwise models for a single translation).
pivoting is prone to error propagation incurred when chaining multiple imperfect translations.
contrasting
train_15739
Our resulting system, which outperforms other state-of-the-art systems, uses a standard pairwise encoder-decoder architecture.
it differs from earlier approaches by incorporating a component that generates the parameters to be used by the encoder and the decoder for the current sentence, based on the source and target languages, respectively.
contrasting
train_15740
One can see that all systems perform similarly in the beginning and converge after observing increasingly more training instances.
the model with the ratio of (1:10) synthetic data gets increasingly biased towards the noisy data after 1M instances.
contrasting
train_15741
Above we examined the mean of prediction loss for each token over all occurrences, in order to identify difficult-to-predict tokens.
the uncertainty of the model in predicting a difficult token [algorithm listing omitted; its input is the difficult tokens and the corresponding sentences in the bitext D].
contrasting
train_15742
As expected using random sampling for backtranslation improves the translation quality over the baseline.
all targeted sampling variants in turn outperform random sampling.
contrasting
train_15743
This is inherently useful since it allows for better learning of less frequent words.
a side effect of this approach is that at times the model generates subword units that are not linked to any words in the source sentence.
contrasting
train_15744
In recent years, neural machine translation (NMT) has achieved great advancement (Nal and Phil, 2013;Sutskever et al., 2014;Bahdanau et al., 2015).
two difficulties are encountered in the practical applications of NMT.
contrasting
train_15745
In this way, the domain-shared translation knowledge can be fully exploited.
the translated sentences often belong to multiple domains, thus requiring an NMT model that generalizes across different domains.
contrasting
train_15746
It should be noted that our utilization of domain classifiers is similar to adversarial training used in (Pryzant et al., 2017) which injects domain-shared contexts into annotations.
by contrast, we introduce domain classifier and adversarial domain classifier simultaneously to distinguish different kinds of contexts for NMT more explicitly.
contrasting
train_15747
datasets of paired sentences in both the source and target language.
bitext is limited and there is a much larger amount of monolingual data available.
contrasting
train_15748
This is likely because synthetic beam and greedy data does not provide as much training signal as the bitext which has more variation and is harder to fit.
sampling and beam+noise require no upsampling of the bitext, which is likely because the synthetic data is already hard enough to fit and thus provides a strong training signal ( §5.2).
contrasting
train_15749
(Sproat et al., 2006;Chang et al., 2009) is considerably easier than generation, owing to the smaller search space.
discovery often uses features derived from resources that are unavailable for low-resource languages, like comparable corpora (Sproat et al., 2006;Klementiev and Roth, 2008).
contrasting
train_15750
Inducing multilingual word embeddings by learning a linear map between embedding spaces of different languages achieves remarkable accuracy on related languages.
accuracy drops substantially when translating between distant languages.
contrasting
train_15751
Fixed Decoding Depth: {2,3,4,5}-pass decoders perform left-to-right decoding with the multi-pass decoder using a fixed number of decoding passes.
to the related machine translation systems, our fixed number-pass decoder significantly outperforms Moses and RNNSearch by at least 7.53 and 1.05 BLEU points, respectively, as Table 2 shows.
contrasting
train_15752
Specifically, the adaptive multi-pass decoder outperforms the multi-pass decoder with a fixed decoding depth by up to 0.69, 0.71, 0.68 and 0.45 BLEU points on the NIST03, NIST04, NIST05 and NIST06 datasets.
to Moses, RNNSearch, Deliberation Network and ABDNMT, the adaptive multi-pass decoder achieves corresponding improvements of about 8.03, 1.55, 0.74 and 0.34 BLEU points, respectively.
contrasting
train_15753
In the training phase, we spend more time training the multi-pass decoder than RNNSearch, Deliberation Network and ABDNMT.
in the testing phase, as illustrated in Table 2, our adaptive multi-pass decoder spends about 180s completing the entire testing procedure due to the auxiliary policy network, in comparison with the corresponding 87s, 162s and 132s of RNNSearch, Deliberation Network and ABDNMT.
contrasting
train_15754
While conventional machine comprehension models were given a paragraph that always contains an answer to a question, some researchers have extended the models to an open-domain setting where relevant documents have to be searched from an extremely large knowledge source such as Wikipedia (Chen et al., 2017;Wang et al., 2017a).
most of the open-domain QA pipelines depend on traditional information retrieval systems which use TF-IDF rankings (Chen et al., 2017;Wang et al., 2017b).
contrasting
train_15755
Despite the efficiency of the traditional retrieval systems, the documents retrieved and ranked at the top by such systems often do not contain answers to questions.
simply increasing the number of top ranked documents to find answers also increases the number of irrelevant documents.
contrasting
train_15756
Estimating θ_a and θ_g is straightforward by using the cross-entropy objective J_1({θ_a, θ_g}) and the backpropagation algorithm.
selecting text regions in the Context Zoom layer makes it difficult to estimate θ_z given their discrete nature.
contrasting
train_15757
The cosine similarity of the centroids of the two versions of Hyperwords is -0.006, and the cosine similarity for Hyperwords-SVD and GloVe is 0.019.
poor initialization as a result of applying the identity transform to very distant word embeddings is not the explanation for the poor performance of MUSE in this set-up: Both sets of Hyperwords embeddings were normalized, but alignment still failed.
contrasting
train_15758
In all the word pairs, average word frequency is 13934.4.
it is only 1676.1 in the top 0.1% word pairs, it is 3984.8 in the top 1%, and it is 7904.9 in the top 10%.
contrasting
train_15759
Word embeddings have been an essential part of neural-network based approaches for natural language processing tasks (Goldberg, 2016).
many popular word embeddings techniques have a fixed vocabulary (Mikolov et al., 2013;Pennington et al., 2014), i.e., they can only provide vectors over a finite set of common words that appear frequently in a given corpus.
contrasting
train_15760
To compensate for the absence of direct supervision, work in crosslingual learning and distant supervision has discovered creative use for a number of alternative data sources to learn feasible models: aligned parallel corpora to project POS annotations to target languages (Yarowsky et al., 2001;Agić et al., 2015;Fang and Cohn, 2016); noisy tag dictionaries for type-level approximation of full supervision (Li et al., 2012); a combination of projection and type constraints (Das and Petrov, 2011;Täckström et al., 2013); and rapid annotation of seed training data.
only one or two compatible sources of distant supervision are typically employed.
contrasting
train_15761
Europarl covers 21 languages of the EU with 400k-2M sentence pairs, while WTC spans 300+ widely diverse languages with only 10-100k pairs, in effect sacrificing depth for breadth, and introducing a more radical domain shift.
as our results show, little projected data turns out to be the most beneficial, reinforcing breadth for depth.
contrasting
train_15762
All data sources employed in our experiment are very high-coverage.
for true low-resource languages, we cannot safely assume the availability of all disparate information sources.
contrasting
train_15763
(ii) Unlike the adversarial methods, our accuracy generally mirrors the model's loss.
the various losses of the adversarial approaches do not well reflect translation accuracy, making model selection or early stopping a challenge in itself.
contrasting
train_15764
Another idea is to replace the softmax in soft attention with sparsity inducing operators (Martins and Astudillo, 2016; Niculae and Blondel, 2017).
all sparse/local attention methods continue to compute P(y) from an attention-weighted sum of inputs (Eq. 2), unlike hard attention.
contrasting
train_15765
Our beam-joint is also a mixture of softmaxes and possibly achieves higher rank than a single softmax.
their mixture requires learning multiple softmax matrices, whereas ours arises from varying attention, and we do not learn any extra parameters beyond soft attention.
contrasting
train_15766
For example, pessimism and negative attitudes impact negatively one's mental health, can induce suicidal thoughts, and affect negatively not only the person in question, but also their family and friends (Peterson and Bossio, 2001;Achat et al., 2000;Scheier et al., 2001).
optimism reduces stress and promotes better physical health and overall well-being (Carver et al., 2010).
contrasting
train_15767
Our main focus in this paper is on MTL as a framework to explore the lexical, structural and topical knowledge involved in users' selection of headlines.
recognizing a popular headline and giving advice on how to write one are not the same: we want to provide editors and journalists with insights as to what constructions are likely to attract more eyeballs.
contrasting
train_15768
Menini and Tonelli (2016) develop a SVM classifier to detect disagreement, relying on three aspects including sentiment-based, semantic and surface features extracted from both whole text and topic-related part.
the performance of all these models depends heavily on the quality of hand-crafted features.
contrasting
train_15769
in response, which indicates a rhetorical mood to show disagreement.
even though "why doesn't he answer" in the response is given less weight by the self attention, the cross attention highlights it and "god" in the quote.
contrasting
train_15770
We can only retrofit the embeddings of authors in the training set D_train, since we need information about the class label in order to construct Ω.
the retrofitting process changes the configuration of the embedding space (into a retrofitted D_train), so a separating hyperplane learned on the retrofitted D_train will not be applicable to a test set D_test in the original embedding space.
contrasting
train_15771
Research has identified a variety of linguistic features, ranging from "stylistic features with n-grams models, parts-of-speech, collocations, LDA, different readability indexes, vocabulary richness, correctness or verbosity" (Rangel et al., 2016).
none of these papers used demographic information directly in the author representations.
contrasting
train_15772
(1995) proposed a semi-supervised framework, active learning, which can minimize the need for human annotation to a certain extent.
these approaches are still mostly or selectively human-labeled, and may suffer from the disadvantages raised above.
contrasting
train_15773
To train the GAN, we used the distributed sentence representations computed from the pretrained sentence embeddings instead of symbolic sentences.
we think the limitation of the pre-trained sentence embeddings can be overcome by building a GAN that generates symbolic sentences and discriminates them.
contrasting
train_15774
It is noteworthy that AL is not a "once-for-all" deal.
it needs to be carried out repeatedly and iteratively until a predefined condition is met, such as the point at which the classification performance remains almost the same. We combine AL with the Exp2Imp transformation.
contrasting
train_15775
• This makes it possible to learn the subtle differences among argument pairs which belong to different relation classes.
the instances listed below are far from informative.
contrasting
train_15776
Neural networks based models like Seq2Seq architecture (Vinyals and Le, 2015;Shang et al., 2015) are proven to be effective to generate valid responses for a dialogue system.
as revealed in many previous works (Li et al., 2016a;Wu et al., 2018), "safe reply" is still an open problem and lots of effort has been made to generate more informative responses (Li et al., 2016a;Mou et al., 2016;Li et al., 2016b;Qiu et al., 2017;He et al., 2017).
contrasting
train_15777
Note that the size of the training set and vocabulary used in our experiments is relatively small compared to the millions of qa-pairs used in other works (Wu et al., 2018), so it is reasonable that bad cases sometimes occur in the results of the baselines.
our models, whether the static one or the dynamic one, could generate amazing responses which are not only grammatical and informative, but also have some emotional expressions like the use of punctuation and repetition.
contrasting
train_15778
Recently, Reinforcement Learning (RL) approaches have demonstrated advanced performance in image captioning by directly optimizing the metric used for testing.
this shaped reward introduces learning biases, which reduces the readability of generated text.
contrasting
train_15779
Actor-critic (Sutton and Barto, 1998) methods are often adopted, which involves training an additional value network to predict the expected reward.
(Rennie et al., 2017) designed a self-critical method that utilizes the output of its own test-time inference algorithm as the baseline to normalize the rewards, which leads to further performance gains.
contrasting
train_15780
The WordNet graph has edges of various types, with the main types being hypernymy and meronymy to connect nodes containing senses.
we do not use these types, and consider an edge as an undirected semantic or lexical relation between two synsets.
contrasting
train_15781
For example, consider the following connected nodes: The PPR vectors of suit and dress have some weight on tailor, which is desirable.
the PPR vector of law will also have a non-zero weight for tailor.
contrasting
train_15782
In this paper we use the same principle and reward n-grams that are found in the source document during the AMRto-Text generation process.
we use a simpler approach using a probabilistic language model in the scoring mechanism.
contrasting
train_15783
We hypothesize this can be attributed to how the AMR dataset is annotated, as there might be discrepancies in different annotators' choices of AMR concepts and relations for sentences with similar wording.
the AMR parsers introduce errors, but they are consistent in their choices of AMR concepts and relations.
contrasting
train_15784
For that, the earliest Document Understanding Conference (DUC) (NIST, 2011) benchmarks, in 2001 and 2002, defined several target summary lengths and evaluated each summary against (manually written) reference summaries of the same length.
due to the high cost incurred, subsequent DUC and TAC (NIST, 2018) benchmarks (2003-2014), as well as the more recently popular datasets CNN/Daily Mail (Nallapati et al., 2016) and Gigaword (Graff et al., 2003), included references and evaluation for just one summary length per input text.
contrasting
train_15785
EXTRACT also outperforms previously published extractive models (i.e., SummaRuNNer, EXTRACT-CNN, and REFRESH).
note that SummaRuNNer generates anonymized summaries (Nallapati et al., 2017) while our models generate non-anonymized ones, and therefore the results of EXTRACT and SummaRuNNer are not strictly comparable (also note that LEAD3 results are different in Table 1).
contrasting
train_15786
Among the aforementioned related studies, a few proposed systems explicitly targeted at generating abstractive summaries for documents.
these systems rely heavily on the attention mechanism and/or copying mechanism, which depends heavily on different parts of the input during the decoding stage.
contrasting
train_15787
Short text categorization has been widely studied since the recent explosive growth of online social networking applications (Song et al., 2014).
with documents, short texts are less topic-focused.
contrasting
train_15788
Major attempts to tackle the problem expand short texts with knowledge extracted from the textual corpus, machine-readable dictionaries, and thesauri (Phan et al., 2008;Wang et al., 2008;Chen et al., 2011;Wu et al., 2012).
because of domain-independent nature of dictionaries and thesauri, it is often the case that the data distribution of the external knowledge is different from the test data collected from some specific domain, which deteriorates the overall performance of categorization.
contrasting
train_15789
This approach uses document labels to influence the selection of anchor words, which in turn affects the resulting topics.
supervised Anchors requires a downstream classifier to be trained using topics as features.
contrasting
train_15790
Our algorithm, called Labeled Anchors, also augments the vector-space representation to include the L document labels.
we do not directly modify Q.
contrasting
train_15791
Therefore, the process of building a classifier scales linearly with the number of documents and can be time consuming compared to topic recovery.
the formulation of Labeled Anchors allows us to construct a classifier with no additional training.
contrasting
train_15792
Intuitively, high-frequency words usually appear in multiple topics.
if we examine Equation 2, we can see that the anchor words are just points in V-dimensional space; they do not actually have to correspond to any particular word so long as that point in space uniquely identifies a topic.
contrasting
train_15793
With just baseline anchors from Gram-Schmidt, the classification accuracy of Labeled Anchors is on par with that of Supervised Anchors using logistic regression as the downstream classifier.
because Labeled Anchors is fast enough to allow interaction, participants are able to improve classification accuracy on the development set by an average of 5.31%.
contrasting
train_15794
In topic modeling so far, perplexity is a direct optimization target.
topic coherence, owing to its challenging computation, is not optimized for and is only evaluated after training.
contrasting
train_15795
Among all datasets, we observed improved NPMI at the same perplexity level, validating the effectiveness of the topic coherence regularization.
on the NYTimes dataset, the improvement is quite marginal even though WETC improvements are very noticeable.
contrasting
train_15796
More recently, augmenting this neural architecture with the attention mechanism (Luong et al., 2015) has dramatically increased the quality of results across most NLP tasks.
in text normalization, state-of-the-art results involving attention (e.g., Xie et al.
contrasting
train_15797
Arabic diacritization, which can be considered a form of text normalization, has received a number of neural efforts (Belinkov and Glass, 2015;Abandah et al., 2015).
state-of-the-art approaches for end-to-end text normalization rely on several additional models and rule-based approaches as hybrid models (Pasha et al., 2014;Nawar, 2015;Zalmout and Habash, 2017), which introduce direct human knowledge into the system, but are limited to correcting specific mistakes and rely on expert knowledge to be developed.
contrasting
train_15798
While tuning scheduled sampling, we found that introducing a sampling probability provided better results than relying on the ground truth, i.e., teacher forcing (Williams and Zipser, 1989).
introducing a schedule did not yield any improvement over keeping the sampling probability constant, and it unnecessarily complicates hyperparameter search.
contrasting
train_15799
In image processing, simple augmentation techniques such as flipping, cropping, or increasing and decreasing the contrast of the image are both widely utilized and highly effective (Huang et al., 2016;Zagoruyko and Komodakis, 2016).
it is nontrivial to find simple equivalences for NLP tasks like machine translation, because even slight modifications of sentences can result in significant changes in their semantics.
contrasting