id (string, length 7–12) | sentence1 (string, length 6–1.27k) | sentence2 (string, length 6–926) | label (string, 4 classes)
---|---|---|---|
train_22200
|
For puns, both the local and global surprisal should be positive because they are unusual sentences by nature.
|
the global surprisal should be lower than the local surprisal due to topic words hinting at the pun word.
|
contrasting
|
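A toy illustration of the local/global surprisal contrast described in train_22200. The probabilities below are invented stand-ins; a real setup would query a language model for p(pun word | context):

```python
import math

# Surprisal is -log2 p(w | context). For a pun word, conditioning on the
# whole sentence (global) should yield *lower* surprisal than conditioning
# on only a few nearby words (local), because topic words hint at the pun.
p_local = 0.001   # invented: p(pun word | short local window)
p_global = 0.02   # invented: p(pun word | full sentence with topic words)

print(f"local surprisal:  {-math.log2(p_local):.2f} bits")   # ~9.97
print(f"global surprisal: {-math.log2(p_global):.2f} bits")  # ~5.64, lower
```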
train_22201
|
SURGEN sometimes generates creative puns that are rated even funnier than human-written puns (example 1).
|
NEURALJOINTDECODER at best generates ambiguous sentences (examples 2 and 3), and sometimes the sentences are ungrammatical (example 1) or hard to understand (example 4).
|
contrasting
|
train_22202
|
Given sentence vector s_i as input, SUMO computes: r_i, e_ij = TMT(r_i, ẽ_ij) (Eq. 11). Iterative Structure Refinement: SUMO essentially reduces summarization to a rooted-tree parsing problem.
|
accurately predicting a tree in one shot is problematic.
|
contrasting
|
train_22203
|
[Table: t = 0 1 2 3 4; w_t = - Bernie Sanders for president; c_t = 2 1 0 0 0.] Unlike the controlled-length scenario, at test time we do not know the number of new content words to generate.
|
the count for most FTFYs is between 1 and 5, inclusive, so we can exhaustively search this range during decoding.
|
contrasting
|
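A minimal sketch of the exhaustive count search described in train_22203. The decoder is passed in as a callable because the excerpt does not specify one; `decode_with_count` is a hypothetical interface that would run decoding constrained to emit exactly `c` new content words and return a scored hypothesis:

```python
from typing import Callable, Tuple

def best_ftfy(
    decode_with_count: Callable[[str, int], Tuple[str, float]],
    source: str,
    counts=range(1, 6),  # counts 1..5 inclusive, per the excerpt
) -> str:
    """Try every allowed new-content-word count and keep the best hypothesis."""
    best_hyp, best_score = "", float("-inf")
    for c in counts:
        hyp, score = decode_with_count(source, c)
        if score > best_score:
            best_hyp, best_score = hyp, score
    return best_hyp

# Toy usage with a stand-in decoder that prefers c = 2:
print(best_ftfy(lambda s, c: (f"{s} [+{c} words]", -abs(c - 2)), "this is a FTFY"))
```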
train_22204
|
In a programming subreddit, a sarcastic FTFY might be this is a strange feature.
|
in a Pokémon subreddit, an FTFY might be this is a strange dinosaur in an argument over whether Armaldo is a bug or a dinosaur.
|
contrasting
|
train_22205
|
It is not possible to treat the missing annotations as "total disagreement" because, per Krippendorff (1995), α_U has no concept of this; there is no lowest disagreement score.
|
are lower than the agreement among extensively trained annotators reported by Stab and Gurevych (2014) (α_U = 0.7726, 0.6033, 0.7594). They are broadly comparable to inter-annotator agreement scores reported in similar (and in some cases, even simpler) discourse-level argument annotation studies with expert-trained annotators, such as Aharoni et al.
|
contrasting
|
train_22206
|
Furthermore, we observe that a main event may trigger several consequent events which themselves are causally related.
|
causal relations involving only non-main events are less likely to show transitivity.
|
contrasting
|
train_22207
|
(2009) proposed a matrix factorization method based on SVD to learn latent representations of users and items from the rating matrix between users and items.
|
since the numbers of users and items in online platforms are usually huge, and the rating matrix between users and items is usually very sparse, it is quite difficult for those rating-based recommendation methods to learn accurate user and item representations (Zheng et al., 2017; Tay et al., 2018).
|
contrasting
|
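A minimal sketch of the SVD-style rating-matrix factorization the excerpt refers to; the tiny dense matrix and the handling of missing entries (zeros) are illustrative simplifications, not the method of the cited paper:

```python
import numpy as np

R = np.array([[5, 0, 3],
              [4, 0, 0],
              [0, 2, 5]], dtype=float)  # rows: users, cols: items, 0 = unrated

k = 2  # latent dimensionality
U, s, Vt = np.linalg.svd(R, full_matrices=False)
user_vecs = U[:, :k] * np.sqrt(s[:k])     # latent user representations
item_vecs = Vt[:k, :].T * np.sqrt(s[:k])  # latent item representations

# Predicted ratings are inner products of user and item vectors; with very
# sparse R these estimates degrade, which is the sparsity problem noted above.
print(np.round(user_vecs @ item_vecs.T, 2))
```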
train_22208
|
These methods usually concatenate the reviews from the same user or the same item into a long document.
|
different reviews usually have different informativeness in representing users and items.
|
contrasting
|
train_22209
|
ESA is usually trained on Wikipedia, since the authors of the original ESA paper suggest that the articles of the training corpus should represent disjoint concepts, which is only guaranteed for encyclopedias.
|
Stein and Anderka (Gottron et al., 2011) challenged this hypothesis and demonstrated that promising results can be obtained by applying ESA on other types of corpora, like the popular Reuters newspaper corpus, as well.
|
contrasting
|
train_22210
|
In our scenario with rather short text snippets and keyword lists, this was not much of an issue.
|
for large documents, such a comprehensive comparison could soon become infeasible.
|
contrasting
|
train_22211
|
With these in mind, we make two important observations about the existing keyphrase extraction techniques: • In the supervised setting, word importance is captured in metrics and engineered features, as are local random walk scores.
|
the structure of the graph formed by the text is not exploited.
|
contrasting
|
train_22212
|
Enhancements to the model introduce much more sophisticated local information aggregation between node pairs, as in Graph Attention Networks (GAT; Veličković et al.).
|
we note that such prior methods fall inherently into the classification paradigm, and hence focus on only local aggregation; i.e., to pull in the most significant feature from its neighbors.
|
contrasting
|
train_22213
|
This state-of-the-art technique exploits a complementary idea of sequential semantic modeling focused on generating keyphrases rather than merely extracting them.
|
their model does not address the common scenario of keyphrase extraction from long documents but only for short excerpts (namely, the abstract).
|
contrasting
|
train_22214
|
This attention is normalized in GAT to smooth the gradients (see the standard formulation below this row).
|
to GCN, GAT replaces the Ã with a learned Ã where each entry α_ij is a normalized score computed on each node pair by the gradient (Fig.
|
contrasting
|
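The normalization formula itself did not survive extraction in the pair above; for reference, the standard GAT attention (Veličković et al.) that the text appears to describe is:

$$\alpha_{ij} = \frac{\exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^\top [\mathbf{W}h_i \,\|\, \mathbf{W}h_j]\big)\big)}{\sum_{k \in \mathcal{N}_i} \exp\big(\mathrm{LeakyReLU}\big(\mathbf{a}^\top [\mathbf{W}h_i \,\|\, \mathbf{W}h_k]\big)\big)}$$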
train_22215
|
In this way, such a node channels more to its neighbor, exerting comparatively steeper gradients to less essential nodes, hence giving a larger chance for Node h 1 to be considered a keyphrase.
|
without Glocal's scaling mechanism, such modeling is not captured and essentially unlearnable.
|
contrasting
|
train_22216
|
It lacks only 0.15 of a MAP point on the test set as compared to the highest-scoring LSP-AP model.
|
as LSP does not optimize the ranking measure directly, it may prove unstable.
|
contrasting
|
train_22217
|
It is also true that in this work, by fixing the weighting schema v, we limited our study to one particular case of a structural ranking representation.
|
finding an appropriate structural feature space, e.g., to the extent enabled by tuning the positional weights v_j for the particular application, can be potentially beneficial.
|
contrasting
|
train_22218
|
(2015) proposed a neural implementation of multi-instance learning to leverage multiple sentences which mention an entity pair in distantly supervised relation extraction.
|
their model picks only one sentence to represent an entity pair, which wastes the information in the neglected sentences.
|
contrasting
|
train_22219
|
We can consider it as a sub-domain of sentiment analysis.
|
the goal of sentiment analysis is to classify the polarity of a tweet sentiment based on its contents, whereas identification of stance is dependent on the specific target.
|
contrasting
|
train_22220
|
The NMT achieved especially high performance in terms of fluency.
|
it tends to generate more omission errors than statistical machine translation does.
|
contrasting
|
train_22221
|
Near 0.0, the distance between T and R is small.
|
in the automatic MT evaluation metrics, the score is close to 1.0 when the evaluation for the translation is generally high.
|
contrasting
|
train_22222
|
This word-level loss ensures efficient and scalable training of seq2seq models.
|
this word-level training objective suffers from a few crucial limitations, namely the label bias, the exposure bias, and the loss-evaluation mismatch (Lafferty et al., 2001). [Figure 1: The BLEU score of BSO decreases after beam size 3 as a result of the increasing length ratio in German→English translation.]
|
contrasting
|
train_22223
|
The beam search restarts with gold when gold falls off the beam (at step 5).
|
moving away from locally-normalized scores does not mean that we should stop using the probabilistic value function.
|
contrasting
|
train_22224
|
After convergence, Z_m^l is added to the multilingual space X_m. In supervised settings this approach would be impractical, as it requires bilingual dictionaries D_{k,l} for all language pairs k, l, and not only with the hub language.
|
within an unsupervised framework this constraint is lifted.
|
contrasting
|
train_22225
|
We observe that value dropping is crucial for SHS to succeed for English-Finnish.
|
it is not necessary for IHS.
|
contrasting
|
train_22226
|
While simple to motivate, this may not always perform well because neural methods benefit from randomization in the minibatches and multiple epochs.
|
the probabilistic curriculum (Bengio et al., 2009) works by dividing the training procedure into distinct phases.
|
contrasting
|
train_22227
|
We can then conclude that perplexity selection may not be an appropriate way to determine the optimal amount of unlabeled-domain data to use for NMT models.
|
if computational resources are limited, according to the experiment results (Figure 2) in our work, we recommend 1024k as the first choice for the cutoff on ranked unlabeled-domain data for NMT domain adaptation models trained with a curriculum learning strategy.
|
contrasting
|
train_22228
|
From Table 4 we note that the Baseline + FT w/ EP-100k-TBT model already produces a reasonable translation for the input sentence.
|
if we further fine-tune the model using only 10k MTNT data, we note that the model still struggles with generation of *very*.
|
contrasting
|
train_22229
|
With more gains arising from continued research on new neural network architectures and accompanying training techniques (Vaswani et al., 2017; Gehring et al., 2017; Chen et al., 2018), NMT researchers, both in industry and academia, have doubled down on their ability to train high-capacity models on large corpora with gradient-based optimization.
|
despite huge improvements in overall translation quality, NMT has shown some glaring weaknesses, including idiom processing and rare word or phrase translation (Koehn and Knowles, 2017; Isabelle et al., 2017; Lee et al., 2018), tasks that should be easy if the model could retain learned information from individual training examples.
|
contrasting
|
train_22230
|
Existing approaches have proposed using off the shelf search engines for the retrieval stage.
|
our objective differs from traditional information retrieval, since the goal of retrieval in semiparametric NMT is to find neighbors which might improve translation performance, which might not correlate with maximizing sentence similarity.
|
contrasting
|
train_22231
|
Average and ensemble techniques on checkpoints can lead to further performance improvement.
|
as these methods do not affect the training process, the system performance is restricted to the checkpoints generated in the original training procedure.
|
contrasting
|
train_22232
|
VBSIX applies the same traversal in execution space (lines 10-21).
|
since each vertex in the execution space represents an execution result and not a particular prefix, we need to modify the scoring function.
|
contrasting
|
train_22233
|
In SCENE, each component has only a slight advantage over beam-search, and therefore both are required to achieve significant improvement.
|
in ALCHEMY and TANGRAM most of the gain is due to the value network.
|
contrasting
|
train_22234
|
(2018) assumed access to the full annotated logical form.
|
we trained from longer sequences, keeping the logical form and intermediate states latent.
|
contrasting
|
train_22235
|
We find that when there are many differences between the current and target world, the value network correctly estimates low expected reward in 87.0% of the cases.
|
when there is just one mismatch between the current and target world, the value network tends to ignore it and erroneously predicts high reward in 78.9% of the cases.
|
contrasting
|
train_22236
|
This case is simpler than restrictive advice, since the human operator just has to provide the direction to adjust the predictions, rather than the precise region of the coordinates.
|
the performance does worsen (M5 vs M6).
|
contrasting
|
train_22237
|
model would have made a prediction ('x') close to the true block (square).
|
the advice region (blue) was incorrect (due to the true block being close to the edge of it) and this led to a significantly worse prediction (circle).
|
contrasting
|
train_22238
|
This task is inherently discriminative, i.e., there is only a single most relevant clip pertaining to a given query in the corresponding video. [Figure 1: Clip Extraction task for the given query 'the biker jumps to another ramp near the camera'.]
|
most prior works (Hendricks et al., 2017, 2018; Chen et al., 2018) explore this as a ranking task over a fixed number of moments by uniformly sampling clips within a video.
|
contrasting
|
train_22239
|
(2019) address this through a query-guided segment proposal network (QSPN).
|
the similarity metric used by these approaches is difficult to learn as it is sensitive to the choice of negative samples (Yu et al., 2018) and it still does not consider the discriminative nature of the task.
|
contrasting
|
train_22240
|
Furthermore, if we add LSTM span predictors along with the video LSTM (ExCL 2-{b, c}) we obtain an additional boost in performance.
|
without a recurrent visual encoder, a recurrent span predictor is essential to capture both uni-modal and cross-modal interactions (ExCL 1-{b, c}).
|
contrasting
|
train_22241
|
unimodal visual model for German verb sense disambiguation, but we find the opposite for Spanish unimodal verb sense disambiguation.
|
the early fusion multimodal model outperforms the best unimodal model for both German and Spanish.
|
contrasting
|
train_22242
|
We also observe that most uncertainty comes from morphological categories such as noun number, noun definiteness (which is expressed morphologically in Bulgarian), and verb tense, all of which are inherent (Booij, 1996) and typically cannot be predicted from sentential context if they do not participate in agreement.
|
aspect, although being closely related to tense, is well predicted since it is mainly expressed as a separate lexeme.
|
contrasting
|
train_22243
|
Finally, our analysis of case category prediction on nouns shows that more common cases such as the nominative, accusative, and genitive are predicted better, especially in languages with fixed word order.
|
cases that appear less frequently and on shifting positions (such as the instrumental), as well as those not associated with specific prepositions, are less well predicted.
|
contrasting
|
train_22244
|
They argue that summarize-then-translate is preferable to avoid both the computational expense of translating more sentences and sentence extraction errors caused by incorrect translations.
|
summarize-then-translate can only be used when the source language is high-resource (Wan et al.
|
contrasting
|
train_22245
|
The explicit use of syntactic information has been proved useful for neural machine translation (NMT).
|
previous methods resort to either tree-structured neural networks or long linearized sequences, both of which are inefficient.
|
contrasting
|
train_22246
|
Automating this process with meta-learning is thus an attractive proposition.
|
it comes with many potential pitfalls such as failing to match a human-designed curriculum, or significantly increasing training time.
|
contrasting
|
train_22247
|
The RL reward that directly corresponds to this goal would be the highest likelihood value reached during an NMT training run.
|
as we use only one NMT training run, having a single reward per run is infeasible.
|
contrasting
|
train_22248
|
We also tried computing the diagonal of F̂ on held-out data (code: github.com/thompsonb/sockeye_ewc), as there is some evidence that estimating Fisher on held-out data reduces overfitting in natural gradient descent (Pascanu and Bengio, 2013).
|
we again found no meaningful differences.
|
contrasting
|
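A sketch of the diagonal Fisher estimate the excerpt discusses (the mean squared gradient over a dataset), written here from the standard EWC recipe rather than from the linked repository; `model`, `data_loader`, and `loss_fn` are supplied by the caller:

```python
import torch

def diag_fisher(model, data_loader, loss_fn):
    """Approximate diag(F) as the average squared gradient of the loss.
    Running this over held-out data gives the variant tried in the excerpt."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}
```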
train_22249
|
We also experimented with personal pronouns as a proxy for persons.
|
we found them unsuitable since they are also often used as referring expressions to other entities, such as animals.
|
contrasting
|
train_22250
|
The pretrained model is then fine-tuned to the target task.
|
the fine-tuning procedure of the language model to the target task does not include an auxiliary objective.
|
contrasting
|
train_22251
|
Our proposed model is closely related to ULMFiT.
|
ULMFiT trains a LM and fine-tunes it to the target dataset, before transferring it to a classification model.
|
contrasting
|
train_22252
|
The resulting MAE was 0.450 for factuality and 1.184 for bias prediction, which is slightly better than our results (yet very comparable for factuality).
|
our goal here is to emphasize the advantages of modeling the two tasks jointly.
|
contrasting
|
train_22253
|
To maximize their benefit to the scientific community, it is crucial to understand and evaluate the construction and limitation of reviews themselves.
|
minimal work has been done to analyze reviews' content and structure, let alone to evaluate their qualities.
|
contrasting
|
train_22254
|
This alone, that cross-domain performance can be so strikingly worse, is a significant result, providing the first estimate of how performance degrades across these domains for this task.
|
when we train an identically parameterized model on the training partition of the literary data and evaluate it on the literary test partition, performance naturally improves substantially to an F-score of 68.3.
|
contrasting
|
train_22255
|
Previous research on automated abusive language detection in Twitter has shown that communitybased profiling of users is a promising technique for this task.
|
existing approaches only capture shallow properties of online communities by modeling followerfollowing relationships.
|
contrasting
|
train_22256
|
The embeddings (called author profiles) are generated by applying a node embedding framework to an undirected unlabeled community graph where nodes denote the authors and edges the follower-following relationships amongst them on Twitter.
|
these profiles do not capture the linguistic behavior of the authors and their communities and do not convey whether their tweets tend to be abusive or not.
|
contrasting
|
train_22257
|
We propose an approach for learning author profiles using GCNs applied to the extended graph.
|
to node2vec, our method allows us to additionally propagate information with respect to whether tweets composed by authors and their communities are abusive or not.
|
contrasting
|
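For context on how a GCN "propagates information" over the extended community graph, the standard layer-wise rule (Kipf and Welling) that such profiling methods build on is, with $\tilde{A} = A + I$ and $\tilde{D}$ its degree matrix:

$$H^{(l+1)} = \sigma\big(\tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2}\,H^{(l)}\,W^{(l)}\big)$$

This is the generic propagation rule, not the specific architecture of the excerpt's authors.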
train_22258
|
GCN on its own achieves a high performance, particularly on the sexism class where its performance is typical of a community-based profiling approach, i.e., high recall at the expense of precision.
|
on the racism class, its recall is hindered by the same factor that Mishra et al.
|
contrasting
|
train_22259
|
Additionally, word embeddings are insensitive to fine-grained semantic distinctions, such as antonymy, due to their construction nature.
|
the sense representations used in our experiments (DeConf) were constructed by exploiting the knowledge encoded in WordNet.
|
contrasting
|
train_22260
|
By fine-tuning on, in some cases, as few as 100 examples, both NLI models are able to recover almost the entire performance gap on both the word overlap and negation challenge datasets (Outcome 1).
|
both models struggle to adapt to the spelling error and length mismatch challenge datasets (Outcome 2).
|
contrasting
|
train_22261
|
Indeed, it seems unlikely that a model will learn to perform algebraic numerical reasoning based on as few as 50 NLI examples.
|
a closer look at this dataset provides a potential explanation for this finding.
|
contrasting
|
train_22262
|
To capture fine-grained features, on the one hand, some works are concerned with matching the relationship of QA pairs in more complex and diverse ways, e.g., CNTN (Qiu and Huang, 2015) and MV-LSTM (Wan et al., 2016).
|
latent representation models aim to jointly learn lexical and semantic information from QA sentences and influence the vector generation directly, e.g., attention mechanism (Bahdanau et al., 2015).
|
contrasting
|
train_22263
|
Relevant relationship in answer selection datasets is binary, only including relevance and irrelevance.
|
in the real CQA applications, it is difficult to verify whether the answers are completely correct or not.
|
contrasting
|
train_22264
|
Specifically, they let the gradients of the stochastic layer be dz/dg_φ(x) ≈ 1 under the ST estimator (see the sketch below this row).
|
the ST estimator is backpropagating with respect to the sample-independent.
|
contrasting
|
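A minimal runnable sketch of the straight-through (ST) estimator described above: the forward pass uses a hard 0/1 value, while the backward pass treats dz/dg ≈ 1 so gradients flow through the soft value:

```python
import torch

logits = torch.randn(4, requires_grad=True)
g = torch.sigmoid(logits)       # soft gate g_phi(x)
z_hard = (g > 0.5).float()      # hard 0/1 value used in the forward pass
z = g + (z_hard - g).detach()   # forward: z_hard; backward: dz/dg = 1

z.sum().backward()
print(logits.grad)  # nonzero: gradients passed "straight through" the threshold
```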
train_22265
|
This has led to several attempts to use GANs for text generation, with a generator using either a recurrent neural network (RNN) (Guo et al., 2017; Press et al., 2017; Rajeswar et al., 2017) or a convolutional neural network (CNN) (Gulrajani et al., 2017; Rajeswar et al., 2017).
|
evaluating GANs is more difficult than evaluating LMs.
|
contrasting
|
train_22266
|
In RNN-based GANs, the previous output token is used at inference time as the input x_t (Guo et al., 2017; Press et al., 2017; Rajeswar et al., 2017).
|
when evaluating with BPC or perplexity, the gold token x t is given as input.
|
contrasting
|
train_22267
|
The output distribution of the generator is expected to converge to the real data distribution during the training.
|
the discriminator f w is expected to discern real samples from generated ones by outputting zeros and ones, respectively.
|
contrasting
|
train_22268
|
We hypothesize that MRS is particularly well-suited for text generation, as it is explicitly compositional, capturing the contribution to sentence meaning of all parts of the surface form (Bender et al., 2015).
|
semantic representations such as Abstract Meaning Representation (AMR; Banarescu et al., 2013) seek to abstract away from the syntax of a sentence as much as possible.
|
contrasting
|
train_22269
|
The method of plan construction will likely not generalize "as is" to other datasets, and the plan structure itself may also be found to be lacking for more demanding generation tasks.
|
on a higher level, our proposal is very general: intermediary plan structures can be helpful, and one should consider ways of obtaining them, and of using them.
|
contrasting
|
train_22270
|
Based on this, we could represent each sentence and each plan as a sequence of entities, and verify that the sequences match.
|
using this criterion is complicated by the fact that it is not trivial to map between the entities in the plan (that originate from the RDF triplets) and the entities in the text.
|
contrasting
|
train_22271
|
possible plans, making this method prohibitive for even moderately sized input graphs.
|
it is sufficient for the WebNLG dataset in which n ≤ 7.
|
contrasting
|
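The count is cut off in the pair above; assuming a plan is (at least) an ordering of the $n$ input triplets, enumeration grows factorially, which makes the $n \le 7$ bound of WebNLG workable but little more:

$$7! = 5040 \qquad \text{vs.} \qquad 15! \approx 1.31 \times 10^{12}$$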
train_22272
|
Table 4 indicates that for decreasingly probable plans our realizer does worse in the first criterion.
|
for both parts of the test set, if the realizer managed to express all of the entities, it expressed them in the requested order, meaning the outputs are consistent with plans.
|
contrasting
|
train_22273
|
Increases in computing power and model capacity have made it possible to generate mostly-grammatical sentence-length strings of natural language text.
|
generating several sentences related to a topic and which display overall coherence and discourse-relatedness is an open challenge.
|
contrasting
|
train_22274
|
The KB provides information about the hotel's services: complimentary breakfast, free wifi, spa.
|
it may not include information about the menu/times for the breakfast, credentials for the wifi, or the cancellation policy for a spa appointment at the hotel.
|
contrasting
|
train_22275
|
Given the wide range of information that may be of interest to guests, it is not clear how to extend the KB in the most effective way.
|
the conversational logs, which many hotels keep, contain the actual questions from guests, and can therefore be used as a resource for extending the KB.
|
contrasting
|
train_22276
|
3 shows that subjective words that represent speakers' attitude (e.g., "ready", "guests", "time") had significantly different embeddings in the question and answer encoders.
|
objective words such as menu, or activity (e.g., "bacon", "cruise", "weekday") had similar embeddings although the two encoders do not directly share the embedding parameters.
|
contrasting
|
train_22277
|
The extracted tuples from NEURON can be used to extend a KB for a specific domain.
|
automatically fusing the tuples with existing facts in the KB requires human involvement (Figure 4: Human-in-the-loop system for extending a domain-specific KB).
|
contrasting
|
train_22278
|
Recently, OPENIE systems based on end-to-end frameworks, such as sequence tagging (Stanovsky et al., 2018) or sequence-tosequence generation (Cui et al., 2018), have been shown to alleviate such engineering efforts.
|
all these systems focus on sentence-level extraction.
|
contrasting
|
train_22279
|
Alternatively, a CQA dataset could be transformed into declarative sentences (Demszky et al., 2018) for a conventional OPENIE system.
|
such a two-stage approach is susceptible to error propagation.
|
contrasting
|
train_22280
|
Even if (similarly to WIKIHOP creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck.
|
to other proposed methods (e.g., Dhingra et al., 2018; Raison et al., 2018; Seo et al., 2016), we avoid training expensive document encoders.
|
contrasting
|
train_22281
|
In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers.
|
the performance does not decrease steeply.
|
contrasting
|
train_22282
|
Knowledge bases (KBs) (such as Freebase (Dong et al., 2015;Xu et al., 2016;Yao and Van Durme, 2014) or DBpedia (Lopez et al., 2010;Unger et al., 2012)) have been used for question answering (Yu and Lam, 2018).
|
the ever-changing nature of online businesses, where new products and services appear constantly, makes it prohibitive to build a high-quality KB to cover all new products and services.
|
contrasting
|
train_22283
|
This amount is large enough to ameliorate the flaws of BERT that has almost no questions on the left side and no textual span predictions based on both the question and the document on the right side.
|
a small amount of finetuning examples is not sufficient to turn BERT to be more task-aware, as shown in Sec.
|
contrasting
|
train_22284
|
In Figure 1, for the question 'When did princess Diana die', Rematch correctly recognizes the relation die and links it to dbo:deathYear.
|
when the question is slightly changed to "Where did princess Diana die?"
|
contrasting
|
train_22285
|
It is important to note that most of these approaches use state of the art machine learning techniques and require a large amount of training data.
|
when these tools are applied to short text in a new domain, such as question answering (QA) or keyword-based search, the performance is limited.
|
contrasting
|
train_22286
|
The scale and diversity of this dataset make it particularly suited for use in training deep-learning models to solve word problems.
|
there is a significant amount of unwanted noise in the dataset, including problems with incorrect solutions, problems that are unsolvable without brute-force enumeration of solutions, and rationales that contain few or none of the steps required to solve the corresponding problem.
|
contrasting
|
train_22287
|
By augmenting our dataset with these formalisms, we are able to cover most types of math word problems.
|
to other representations like simultaneous equations, our formalisms ensure that every problem-solving step is aligned to a previous one.
|
contrasting
|
train_22288
|
Discussions As we mentioned in section 3, the continuous nature of our formalism allows us to solve problems requiring systems of equations.
|
there are other types of word problems. [Table excerpt. Error type: Hard problems (45%). Problem: Jane and Ashley take 8 days and 40 days respectively to complete a project when they work on it alone.]
|
contrasting
|
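As a worked instance of the quoted "hard problem" (solved here by us for illustration, using the standard joint work-rate identity): Jane completes $1/8$ of the project per day and Ashley $1/40$, so together

$$\frac{1}{8} + \frac{1}{40} = \frac{5 + 1}{40} = \frac{3}{20} \;\text{per day}, \qquad \text{i.e. } \frac{20}{3} \approx 6.7 \text{ days},$$

which requires setting up and combining rates rather than applying a single operation.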
train_22289
|
And the gated recurrent unit is used to control the contributions of shortcut paths and the path between adjacent characters.
|
as the study has shown, the gate mechanism sometimes fails to choose the right path.
|
contrasting
|
train_22290
|
Arabic text is typically written without short vowels (or diacritics).
|
their presence is required for properly verbalizing Arabic and is hence essential for applications such as text to speech.
|
contrasting
|
train_22291
|
They are a natural choice to handle unknown words.
|
BPE does not fit in our scenario as it may create source and target segments of different lengths.
|
contrasting
|
train_22292
|
Using a Transformer model led to nearly identical WER to using our NMT model with attention.
|
their results are somewhat complementary.
|
contrasting
|
train_22293
|
Heuristics can improve incorporation of this data: a relabeling heuristic (Pair) helps on HEAD and a filtering heuristic (Overlap) is helpful in both settings.
|
our trainable filtering and relabeling models outperform both of these techniques.
|
contrasting
|
train_22294
|
The rule-based approach has the advantage of not requiring manual annotation, while also allowing easy access to adding and removing individual rules.
|
language is continuously evolving, and there are exceptions to most grammar rules we know.
|
contrasting
|
train_22295
|
Difficult cases (such as long distance subject-verb relations) are often ignored in order to ensure high precision, at the expense of the recall of the system.
|
our rule-based system is not limited to the detection of simple cases of SVA errors.
|
contrasting
|
train_22296
|
In this regard, as experimented in (Rei and Yannakoudakis, 2016), training on more data in the same domain is a valid solution for improving the performance of LSTM models.
|
when also adding artificially generated data to the training set, we reach higher scores only on 2 out of the 4 benchmarks.
|
contrasting
|
train_22297
|
We demonstrate that error generation is much less sensitive to parsing errors and irregularities than rule-based systems for detecting subject-verb agreement.
|
artificial error generation enables us to utilise much more training data, and therefore can develop more robust neural models for SVA error detection that do not overfit the available, manually annotated training data.
|
contrasting
|
train_22298
|
We employ 102 features obtained from WALS related to word order and morphosyntactic alignment, further reduced to 50 dimensions using PCA.
|
none of these criteria correlates significantly with tagging accuracy, as we elaborate in Section 5.1.
|
contrasting
|
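A minimal sketch of the reduction step described in train_22298 (102 WALS-derived features down to 50 PCA dimensions); the random binary matrix is a stand-in for real WALS data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 102)).astype(float)  # 200 languages x 102 features

Xc = X - X.mean(axis=0)                 # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
X50 = Xc @ Vt[:50].T                    # project onto the top-50 principal axes
print(X50.shape)                        # -> (200, 50)
```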
train_22299
|
We explore model confidence, as measured by perplexity and typological similarities, as intuitive criteria for PL choice.
|
both criteria prove to be uncorrelated with tagging accuracy scores.
|
contrasting
|