id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_21600
We observe that all the baselines significantly drop in performance as we reduce the proportion of labeled training set.
the drop happens at a much slower rate for our selftrained model.
contrasting
train_21601
, use a search engine and a QA model to answer the simple questions, from which we compute the final answer a. Neural models trained over large datasets led to great progress in RC, nearing human-level performance .
analysis of models revealed (Jia and Liang, 2017;Chen et al., 2016) that they mostly excel at matching questions to local contexts, but struggle with questions that require reasoning.
contrasting
train_21602
The first four utilities all return a value by performing the named math operation on its two input arguments.
sum(function,condition) returns the sum of the values of FOL function instances which can be unified with the first argument (i.e., function) and satisfy the second argument (i.e., condition).
contrasting
train_21603
The studies include age, gender prediction (Marquardt et al., 2014;Sap et al., 2014), psychological well being (Dodds et al., 2011;Choudhury et al., 2013), and a host of other behavioural, psychological and medical phenomena .
a few works exist which analyze these factors using socio-economic characteristics of the Twitter users.
contrasting
train_21604
Surprisingly, we found that average intelligent users are past-oriented but considering the sentiment dimension they seem to be more future-positive.
this should be validated with further investigation.
contrasting
train_21605
Interestingly, we found a negative correlation with the future-positive.
we found a positive correlation (0.3614) with the future-neutral which suggests that the users with much above intelligence are futuristic and they express a neutral view.
contrasting
train_21606
We found similar patterns of relative goodness of measures: WW was consistently better in scoring similarity between two words and WC was better in measuring the thematic relatedness.
the asymmetry between WC and CW did not come out clearly in these experiments and the overall performance of the GloVe model in the similarity task was much lower than Skipgram.
contrasting
train_21607
Indeed, a given sentence-level operation could both change the original meaning by adding or removing information (affecting the P score) and increase simplicity (S).
the identity transformation perfectly preserves the meaning of the original sentence without making it simpler.
contrasting
train_21608
One problem with soft attention is that it considers all entity vectors when constructing the topic vector.
not all entities are important and necessary when generating summaries.
contrasting
train_21609
A quick read of the original article tells us that the main topic of the article is all about the two political parties arguing over the deal with Iran.
the entity "nuclear" appeared a lot in the article, which makes the soft model wrongly focus on the "nuclear" entity.
contrasting
train_21610
This is a major advantage of DUC compared to other datasets, especially when evaluating with ROUGE (Lin, 2004b,a), which was designed to be used with multiple references.
dUC datasets are small, which makes it difficult to use them as training data.
contrasting
train_21611
For instance, a summary might contain many individual words from the article and therefore have a high coverage.
if arranged in a new order, the words of the summary could still be used to convey ideas not present in the article.
contrasting
train_21612
We observe that publications with lower compression ratio (top-left of the figure) exhibit higher diversity along both dimensions of extractiveness.
as the median compression ratio increases, the distributions become more con- centrated, indicating that summarization strategies become more rigid.
contrasting
train_21613
Semantic Parsing Results SP results are summarized in Table 2.
the neural models, especially those with biasing and copying, strongly outperform all other models and are competitive with related work.
contrasting
train_21614
Consistent with recent findings (Dong and Lapata, 2016), we show that relatively simple neural sequence models are competitive with, and in some cases outperform, traditional grammar-based SP methods on bench-mark SP tasks.
this result is not observed in our technical documentation task, in part because this problem is much harder for neural learners given the sparseness of the target data and lack of redundancy.
contrasting
train_21615
In this way, we can extract accurate relation mentions for triples with high precision.
if a relation mention doesn't contain any hyponym/synonym words of the relation, our method would be unable to identify it.
contrasting
train_21616
Pairs of unprovable sub-goals and plausible single premises are identified by means of a variable unification routine and then linguistic relations between their logical predicates are checked using lexical knowledge such as Word-Net and VerbOcean (Chklovski and Pantel, 2004).
this mechanism is limited to capturing word-to-word relations within a sentence pair.
contrasting
train_21617
To increase their coverage of phrasal knowledge, the system combines a resolution strategy to align clauses and literals in a sentence pair and a statistical classifier to identify their semantic relation.
this strategy only considers one possible set of alignments between fragments of a sentence pair, possibly causing inaccuracies when there are repetitions of content words and meta-predicates.
contrasting
train_21618
The SNLI dataset (Bowman et al., 2015) contains inference problems requiring phrasal knowledge.
it is not concerned with logically challenging expressions; the semantic relationships between a premise and a hypothesis are often limited to synonym/hyponym lexical substitution, replacements of short phrases, or exact word matching.
contrasting
train_21619
In contrast, memory networks are exactly designed to remember previous information.
given the large size of documents and paragraphs, basic memory networks do not handle well irrelevant and noisy information, which we confirmed in our experiments.
contrasting
train_21620
In particular, gradient tree boosting has been shown to be highly competitive for ED in recent work (Yang and Chang, 2015;Yamada et al., 2016).
although achieving appealing results, existing gradient-tree-boosting-based ED systems typically operate on each individual mention, without attempting to jointly resolve entity mentions in a document together.
contrasting
train_21621
Joint entity disambiguation has been shown to significantly boost performance when used in conjunction with other machine learning techniques (Ratinov et al., 2011;Hoffart et al., 2011).
how to train a global gradient tree boosting model that produces coherent entity assignments for all the mentions in a document is still an open question.
contrasting
train_21622
Their experiments on the OAEI benchmarks show that their techniques, even when combined with classical NLP techniques, could not outperform the state-of-theart.
we refine pre-trained word embeddings with the intention of leveraging a new word vector set that is tailored to the ontology matching task.
contrasting
train_21623
Schema.org is a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond.
dBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web.
contrasting
train_21624
This operation clearly reduces the model complexity, and can learn intragroup features efficiently.
it fails to capture dependency cross different groups.
contrasting
train_21625
Vlachos and Riedel (2014) constructed a dataset for claim verification consisting of 106 claims, selecting data from fact-checking websites such as PolitiFact, taking advantage of the labelled claims available there.
in order to develop claim verification components we typically require the justification for each verdict, including the sources used.
contrasting
train_21626
A differently motivated but closely related dataset is the one developed by Angeli and Manning (2014) to evaluate natural logic inference for common sense reasoning, as it evaluated sim-ple claims such as "not all birds can fly" against textual sources -including Wikipedia -which were processed with an Open Information Extraction system (Mausam et al., 2012).
the claims were small in number (1,378) and limited in variety as they were derived from eight binary ConceptNet relations (Tandon et al., 2011).
contrasting
train_21627
The RTE component must correctly classify a claim as NOTENOUGHINFO when the evidence retrieved is not relevant or informative.
the instances labeled as NOTENOUGHINFO have no evidence annotated, thus cannot be used to train RTE for this class.
contrasting
train_21628
We observe that with fewer than 6000 training instances, the accuracy of DA is unstable.
with more data, its accuracy increases with respect to the log of the number of training instances and exceeds that of MLP.
contrasting
train_21629
We also experimented with PPMI and smoothed PPMI with α = 0.75 (Levy et al., 2015) that are commonly used in word embedding.
the learned textual relation embedding turned out to be not very helpful for relation extraction.
contrasting
train_21630
We compare our models to that of Gerber and Chai (2012).
their original logistic regression model used many features based on gold annotation from FrameNet, PropBank and Nom-Bank.
contrasting
train_21631
The TempRel extraction task has a strong dependency on prior knowledge, as shown in our earlier examples.
very limited attention has been paid to generating such a resource and to make use of it; to our knowledge, the TEMPROB proposed in this work is completely new.
contrasting
train_21632
They have been used for a wide variety of tasks, such as textual entailment (Berant et al., 2011), question answering (Fader et al., 2014), and knowledge base population (Angeli et al., 2015).
perhaps due to limited data, existing methods use semisupervised approaches (Banko et al., 2007;Wu and Weld, 2010), or rule-based algorithms (Fader et al., 2011;Mausam et al., 2012;Del Corro and Gemulla, 2013).
contrasting
train_21633
Like QA-SRL, QAMR represents predicate-argument structure with a set of question-answer pairs about a sentence, where each answer is a span from the sentence.
while QA-SRL restricts questions to fit into a particular verb-centric template, QAMR is more general, allowing any natural language question that begins with a wh-word and contains at least one word from the sentence.
contrasting
train_21634
Hence, the written forms of the two languages should be very similar, which makes the language embeddings based on language modelling highly similar to one another.
when the embeddings are fine-tuned on a task taking orthography as well as phonology into account, this is no longer the case.
contrasting
train_21635
Figure 1 shows that the language embeddings of Norwegian Bokmål and Danish diverge from each other, which is especially striking when comparing to the converging with the typologically much more distant languages Tagalog and Finnish.
the absolute difference between Norwegian Bokmål and both Tagalog/Finnish is still greater than that of Norwegian Bokmål and Danish even after 3,000 iterations.
contrasting
train_21636
Just as A * approximates h with a "heuristic"ĥ, the next section will approximate H t using a neural estimateĤ t (equations (5)-(6)).
the specific form of our approximation is inspired by cases where H t can be computed exactly.
contrasting
train_21637
Reduce probabilities are computed conditioned on positions k and j, which are accessible through the dynamic program deduction rule.
the shift probabilities cannot be computed at the shift rule for deducing [j − 1, j], as it does not have access there to the top of the stack.
contrasting
train_21638
We observe the same pattern for the embedding space mapping approach for noise reduction against the narrow window embeddings.
combining the extension with the embedding space mapping methods, along with the LSTMbased character embeddings, results in the best performing system.
contrasting
train_21639
The orange points in Figure 4 show the performance of this experiment with different context sizes k. We observe that including shuffled distant words is substantially better than truncating them completely.
shuffling does cause performance to degrade relative to the base parser even when the unshuffled win- dow is moderately large, indicating that the LSTM is propagating information that depends on the order of words in far-away positions.
contrasting
train_21640
More advanced methods, such as GloVe, set non-uniform weights for observed entries to reflect their confidence.
the time complexity of their algorithm is proportional to number of nonzero weights (|(i, j) | C ij = 0|), thus they have to set zero weights for all the unobserved entries (C ij = 0 for Ω − ), or try to incorporate a small set of unobserved entries by negative sampling.
contrasting
train_21641
As the corpus size grows, the performance of all models improves, and the PU-learning model consistently outperforms other methods in all the tasks.
with the size of the corpus increases, the difference becomes smaller.
contrasting
train_21642
We chose that lexicon since we have extra information available for its entries that we want to examine, namely polar intensity ( §4.1.1) and sentiment views ( §4.1.2).
since we noted that the Subjectivity Lexicon misses some prototypical abusive words (e.g.
contrasting
train_21643
Among the three intensity types, the most effective one is the person-based intensity (INT person ).
it can be effectively combined with the remaining types.
contrasting
train_21644
These approaches are motivated from an information extraction perspective, for instance in aiding tasks such as knowledge base population.
1 it has not been studied whether such sophisticated author commitment analysis can go beyond what is expressed in language and reveal the underlying social contexts in which language is exchanged.
contrasting
train_21645
O' Barr (1982) analyzed courtroom interactions and identified hedges and hesitations as some of the linguistic markers of "powerless" speech.
there has not been any computational work which has looked into how power relations relate to the level of commitment expressed in text.
contrasting
train_21646
As discussed earlier, this is expected since superiors issue more requests (as found by (Prabhakaran and Rambow, 2014)), the propositional heads of which would be tagged as NA by the belief tagger.
our hypothesis H.1 is proven false.
contrasting
train_21647
VRB Verbosity (e.g., message count) PST Positional (e.g., thread initiator?)
tHR thread structure (e.g., reply rate) DIA Dialog act tagging (e.g., request count) ODP Overt displays of power LEX Lexical ngrams (lemma, POS, mixed ngrams) None of the features used in POWERPREDIC-tOR use information from the parse trees of sentences in the text in order to accurately obtain the belief labels, deep dependency parse based features are critical (Prabhakaran et al., 2010).
contrasting
train_21648
Automatic evaluations, such as those based on word deletions, are frequently used since they enable rapid iterations and are easy to reproduce.
it is unclear to what extent they correspond with human-based evaluations.
contrasting
train_21649
Similarly, for Neural Language.
machine Translation and Language model are globally appeared in the input document collections over time and captured in the topics by RNN-RSm and DTm.
contrasting
train_21650
As the only large human-annotated corpus for NLI currently available, the Stanford NLI Corpus (SNLI;Bowman et al., 2015) has enabled a good deal of progress on NLU, serving as a major benchmark for machine learning work on sentence understanding and spurring work on core representation learning techniques for NLU, such as attention (Wang and Jiang, 2016;Parikh et al., 2016), memory (Munkhdalai and Yu, 2017), and the use of parse structure (Mou et al., 2016b;Bowman et al., 2016;Chen et al., 2017).
sNLI falls short of providing a sufficient testing ground for machine learning models in two ways.
contrasting
train_21651
One of the mainstream approaches to this task is to exploit the lexico-syntactic paths connecting two target words, which reflect the semantic relations of word pairs.
this method requires that the considered words co-occur in a sentence.
contrasting
train_21652
Thus, in the neural path-based method, paths(w 1 , w 2 ) for these word pairs is padded with an empty path, like UNK-lemma/UNK-POS/UNK-dep/UNK-dir.
this process makes path-based classifiers unable to distinguish between semanticallyrelated pairs with no co-occurrences and those that have no semantic relation.
contrasting
train_21653
We test this property on HyperLex , a gold standard dataset for evaluating how well word representation models capture graded LE, grounded in the notions of concept (proto)typicality (Rosch, 1973;Medin et al., 1984) and category vagueness (Kamp and Partee, 1995;Hampton, 2007) As shown by the high inter-annotator agreement on HyperLex (0.85), humans are able to consistently reason about graded LE.
10 current state-of-the-art representation architectures are far from this ceiling.
contrasting
train_21654
The VISUAL model outperforms SLQS-SIM.
its numbers on BLESS (0.88), WBLESS (0.75), and BIBLESS (0.57) are far from the topperforming LEAR vectors (0.96, 0.92, 0.88).
contrasting
train_21655
WORD2GAUSS (Vilnis and McCallum, 2015) represents words as multivariate K-dimensional Gaussians rather than points in the embedding space: it is therefore naturally asymmetric and was used in LE tasks before, but its performance on Hy-perLex indicates that it cannot effectively capture the subtleties required to model graded LE.
note that the comparison is not strictly fair as WORD2GAUSS does not leverage any external knowledge.
contrasting
train_21656
It is therefore evident that this method is not informative in terms of the crosslingual properties of AMR.
its simplicity makes it a compelling engineering solution for parsing other languages.
contrasting
train_21657
Given only this sentence, or this sentence and a strict surface syntax representation that does not indicate elided predicates, this is a challenging task.
given a dependency graph that reconstructs the elided predicate for each conjunct, the problem becomes much easier and methods developed to extract information from dependency trees of clauses with canonical structures are much more likely to extract the correct information from a gapped clause.
contrasting
train_21658
If AMR and DG were very similar in how they represent information, such correspondences would probably hold between subgraphs consisting of a single edge, as in figure 1 cat nmod:poss →my ∼ cat poss →I.
aMR by design abstracts away from syntax and it should not be assumed that all mappings will be so clean.
contrasting
train_21659
11 Such an attitude reflects decades of work in the syntax-semantics interface (Partee, 2014) and the utility of dependency syntax for other forms of semantics (e.g., Oepen et al., 2014;Reddy et al., 2016;Stanovsky et al., 2016;White et al., 2016;Zhang et al., 2017;Hershcovich et al., 2017).
this assumption has not been empirically tested, and as Bender et al.
contrasting
train_21660
One measure of similarity between AMR and DG graphs is the configuration of the most complex subgraph alignment between them.
configuration a:b is higher than c:d if a + b > c + d. all configurations involving 0 are lower than those which do not.
contrasting
train_21661
In our graph-based parsing setting, we do not have a notion of parse history or partial derivations that directly connect intransitive and transitive verbs.
syntactic analogies still hold to a considerable degree in the vector representations of supertags induced by our joint models, with average rank of the correct answer nearly the same as that obtained in the transition-based parser.
contrasting
train_21662
Copulas A copula is usually treated as a dependent to the predicate both in our TAG grammar (adjunction) and UDR.
we found two situations where they differ from each other.
contrasting
train_21663
For example, it gives (be, legislation, cop) in "... on how much social legislation there should be."
our TAG grammar analyzes that "there" is attached to "be" with label 0.
contrasting
train_21664
Compared to KN, the 5-gram LSTM can generalize to unseen ngrams thanks to its embedding layer and recurrent connections.
it cannot discover longdistance dependency patterns that span more than five words.
contrasting
train_21665
Compared to many existing works that apply either metric-based or optimization-based meta-learning to image domain with low inter-task variance, we consider a more realistic setting, where tasks are diverse.
it imposes tremendous difficulties to existing state-of-the-art metric-based algorithms since a single metric is insufficient to capture complex task variations in natural language domain.
contrasting
train_21666
Due to such a simplified setting, almost all previous works employ a common meta-model (metric-/optimization-based) for all few-shot tasks.
this setting is far from the realistic scenarios in many real-world applications of few-shot text classification.
contrasting
train_21667
Moreover, these few-shot tasks are usually constructed by sampling from one huge dataset, thus all the tasks are guaranteed to be related to each other.
in real-world applications, the few-shot learning tasks could be diverse: there are different tasks with varying number of class labels and they are not guaranteed to be related to each other.
contrasting
train_21668
For example, in early stages where 10% or 20% of the information is available it is necessary to model very short length documents, which tend to produce sparse and low discriminative representations.
late stages require to exploit as much evidence as possible to make accurate predictions.
contrasting
train_21669
In this paper, we thus propose Multinomial Adversarial Networks (henceforth, MANs) for the task of multi-domain text classification.
to standard adversarial networks (Goodfellow et al., 2014), which serve as a tool for minimizing the divergence between two distributions (Nowozin et al., 2016), MANs represent a family of theoretically sound adversarial networks that, in contrast, leverage a multinomial discriminator to directly minimize the divergence among multiple probability distributions.
contrasting
train_21670
Denote the annotated corpus in a labeled domain d i 2 L as X i ; and (x, y) ⇠ X i is a sample drawn from the labeled data in domain d i , where x is the input and y is the task label.
for any domain d i 0 2 , denote the unlabeled corpus as U i 0 .
contrasting
train_21671
To make fair comparisons, the previous experiments follow the standard settings in the literature, where the widely adopted Amazon review dataset is used.
this dataset has a few limitations.
contrasting
train_21672
The prior art of MDTC (Wu and Huang, 2015) decomposes the text classifier into a general one and a set of domain-specific ones.
the general classifier is learned by parameter sharing and domain-specific knowledge may sneak into it.
contrasting
train_21673
A representation learning method should encode the review structure (e.g.
the role of the terms at first and ) in order to uncover the sentiment.
contrasting
train_21674
Co-training methods exploit predicted labels on the unlabeled data and select samples based on prediction confidence to augment the training.
the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space.
contrasting
train_21675
Concretely, we introduce a joint formulation of a Q-learning agent and two co-training classifiers.
to previous predetermined data sampling methods of co-training, we design a Q-agent to automatically learn a data selection policy to select high-quality unlabeled examples.
contrasting
train_21676
As for our cases, when the two classifiers are initialized with different labeled seeding sets, they can be very unstable.
after enough iterations with the properly selected unlabeled data, the performance would be stable generally.
contrasting
train_21677
Usually, the more substantial labeled training datasets will lead to more stable models.
the problem is that the AGs News and DBpedia have 4 and 14 classes separately, while the Clickbait dataset only has 2 classes.
contrasting
train_21678
In both cases, the segmentation model is trained only in monolingual data, which may result in units that are not suitable for translation.
there have been multiple efforts to build models operating purely at the character level (Ling et al., 2015a;Yang et al., 2016;Lee et al., 2017).
contrasting
train_21679
While overlarge embedding sizes hurt accuracy because of overfitting issues, smaller sizes are not Test Set tst2011 tst2012 tst2013 tst2014 total RNN (Gulcehre et al., 2015) 18 preferable because of insufficient representation power.
our dense models show that with better model design, the embedding information can be well concentrated on fewer dimensions, e.g., 64.
contrasting
train_21680
SliceNet-Full matches our result, and SliceNet-Super outperforms by 0.58 BLEU score.
both models have 2.2x more parameters than our model.
contrasting
train_21681
// Encore quelques bugs/#insectes... Machine translation (MT) systems typically translate sentences independently of each other.
certain textual elements cannot be correctly translated without linguistic context, which may appear outside the current sentence.
contrasting
train_21682
But it is remarkable that the only failure to beat the baseline in terms of BLEU is when the algorithm is tasked with placing four random constraints (before BPE) with a beam size of 5.
dBA never has any trouble placing phrasal constraints (dashed lines).
contrasting
train_21683
While there is no explicit training towards the metric, BLEU, modeling in machine translation assumes that better model scores correlate with better BLEU scores.
a general repeated observation from the NMT literature is the disconnect between model score and BLEU score.
contrasting
train_21684
Neural machine translation (NMT) (Bahdanau et al., 2014;Sennrich et al., 2016a;Wang et al., 2017b) is now the state-of-the-art in machine translation, due to its ability to be trained end-toend on large parallel corpora and capture complex parameterized functions that generalize across a variety of syntactic and semantic phenomena.
it has also been noted that compared to alternatives such as phrase-based translation (Koehn et al., 2003), NMT has trouble with lowfrequency words or phrases (Arthur et al., 2016;Kaiser et al., 2017), and also generalizing across domains (Koehn and Knowles, 2017).
contrasting
train_21685
As we can see, the search engine retrieval time is negligible and the increase of NMT decoding time caused by our method is also small.
collecting translation pieces needed considerable time, although our implementation was in Python and could potentially be significantly faster in a more efficient programming language.
contrasting
train_21686
They use a hierarchical long-short term memory (LSTM) architecture for the discriminator.
to their approach, we apply the CNN-based discriminator for the machine translation task.
contrasting
train_21687
With the increasing of N , the translation performance of the model gets improved.
with N set larger than 20, we get little improvement than the model with N set as 20 and the training time exceeds our expectation.
contrasting
train_21688
Neural Machine Translation (NMT) with attentional encoder-decoder architectures Bahdanau et al., 2015) has revolutionised machine translation, and achieved state-of-the-art for several language pairs.
nMT is notorious for its need for large amounts of bilingual data (Koehn and Knowles, 2017) to achieve reasonable translation quality.
contrasting
train_21689
They report the best performance with fully sharing the encoder.
our architecture uses partial sharing on deep stacked encoder and decoder components, and the results show that it is critical for NMT improvement in MTL.
contrasting
train_21690
One of the reasons for their effectiveness is their ability to capture relevant source-side contextual information at each time-step prediction through an attention mechanism.
the target-side context is solely based on the sequence model which, in practice, is prone to a recency bias and lacks the ability to capture effectively nonsequential dependencies among words.
contrasting
train_21691
Additionally, variational NMT (Zhang et al., 2016a) introduces a latent variable to model the underlying semantics of source sentences.
to these studies, we focus instead on the contextual information on the target side.
contrasting
train_21692
In contrast to the first attention function, which makes use of the hidden vector s t , the second one is based only on the previous word representations, therefore, it is independent of the current prediction representation.
the normalization of this function still depends on t. To compare our approach with similar studies, we adapted two representative self-attentive networks for application to NMT.
contrasting
train_21693
They also demonstrated improvements over an earlier context-free baseline model (Jiampojamarn et al., 2008).
they did not evaluate on ambiguous forms, nor directly compare context-sensitive and context-free versions of their own model.
contrasting
train_21694
As expected, there is a strong negative correlation between the percentage of unseen words and the accuracy of Lematus 20-Ch: the rank correlation is R = −0.73 (p < 0.001; we use rank correlation because it is less sensitive to outliers than is linear correlation, and the plot clearly shows several outliers.)
to our original prediction, however, Lematus 20-Ch is actually more accurate for languages with greater ambiguity (R = 0.44, p = 0.05).
contrasting
train_21695
We also consider the CoNLL 2003 dataset (Tjong Kim Sang and De Meulder, 2003) as it has been used as the standard dataset for NER benchmarks.
we emphasize that both datasets present significantly different challenges and, thus, some relevant aspects in CoNLL 2003 may not be that relevant in the WNUT 2017 dataset.
contrasting
train_21696
For the OOV problem, we use FastText to provide vectors to 2,333 words (around 13% of the vocabulary).
the ablation experiment shows a small improvement, which suggests that those words did not substantially contribute to the meaning of the context.
contrasting
train_21697
Tying weights in NLM: Reusing embeddings in word-level neural language models is a technique which was used earlier (Bengio et al., 2001;Mnih and Hinton, 2007) and studied in more details recently (Inan et al., 2017;Press and Wolf, 2017).
not much work has been done on reusing parameters in subword-aware or subwordlevel language models.
contrasting
train_21698
2015, this can significantly reduce the total number of parameters for large models trained on huge datasets (1B tokens) with large vocabularies (800K tokens).
we do not expect significant reductions on smaller data sets (1-2M tokens) with smaller vocabularies (10-30K tokens), which we use in our main experiments.
contrasting
train_21699
If an embedding model has n layers, there are 2 n ways to reuse them, as each layer can either be tied or untied at input and output.
there are two particular configurations for each of the embedding models that do not interest us: (i) when neither of the layers is reused, or (ii) when only the very first embedding layer is reused.
contrasting