id: string (7–12 chars)
sentence1: string (6–1.27k chars)
sentence2: string (6–926 chars)
label: string (4 classes)
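Every row shown below carries the `contrasting` label. As a minimal sketch of how a split with this schema might be loaded and inspected, assuming the records are stored as a JSON Lines file named `train.jsonl` (the file name and loading path are assumptions, not part of the dataset itself):

```python
import json
from collections import Counter

# Read one record per line; each record has the fields described above:
# id, sentence1, sentence2, label.
rows = []
with open("train.jsonl", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

# The label column is categorical with 4 classes; count how often each occurs.
print(Counter(r["label"] for r in rows))

# Keep only the contrasting pairs, mirroring the examples listed below.
contrasting = [r for r in rows if r["label"] == "contrasting"]
print(contrasting[0]["id"], contrasting[0]["sentence1"], contrasting[0]["sentence2"], sep="\n")
```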
train_16900
(2015) propose a sparse coding method to generate summaries that not only cover the key content in news but also the focal points highlighted by readers' comments.
they do not consider semantic information.
contrasting
train_16901
But they do not tackle sequential context information among sentences and instead treat them as separate instances.
we deal with sequential context information within each document and the relationships among documents.
contrasting
train_16902
In fact, one can also try to use a CNN to get t_d.
our experiments suggest RNN performs better than CNN.
contrasting
train_16903
Cao and Clark (2019) factor the generation process leveraging syntactic information to improve the performance.
they linearize both AMR and constituency graphs, which implies that important parts of the graphs cannot be well represented (e.g., coreference).
contrasting
train_16904
On the same dataset, we have competitive results to Damonte and Cohen (2019).
we do not rely on anonymisation preprocessing, so as not to lose semantic signals.
contrasting
train_16905
By penalizing such inconsistencies, the model enables the generation of more consistent outputs.
perfect consistency between attention weights occasionally prevents the model from generating proper outputs.
contrasting
train_16906
It would, however, take the expertise of at least a teacher of English and also a computer engineer to achieve it; the former would have to think of what is typical in the given topic and then to make a set of rules; the latter then would have to turn them into computer-readable forms.
the neural retrieval-based method only requires a teacher of English to annotate a given corpus with feedback comments, without examining what is typical, which is much more effective and efficient.
contrasting
train_16907
On the one hand, it would require a successful parsing to recognize the sources of the errors.
it would require successful error detection to parse them correctly.
contrasting
train_16908
Compared with rule-based approaches, neural models (Yuan et al., 2017) can generate more fluent and grammatical questions.
question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence, which confuses the model during training and prevents concrete automatic evaluation.
contrasting
train_16909
From this correlation analysis against human judgments, we observe that, as expected, the Language Model metric captures readability better than ROUGE, while falling short on relevance.
the results obtained using the proposed QA-based metrics indicate their potential benefits, especially under the unsupervised setting, with QA_conf and QA_fscore capturing readability and relevance better than all the other reported metrics, including ROUGE.
contrasting
train_16910
To elaborate further, we notice that, applying the learned coefficients for (1) to the results obtained by models reinforced on QA_learned and QA_equally (see Table 2), we obtain very similar scores (namely, 136.43 for QA_equally and 136.4 for QA_learned).
the qualitative analysis reported in Tables 3 and 4 shows that while they perform similarly in terms of relevance, a significantly lower score for readability is obtained using QA_equally.
contrasting
train_16911
Nema et al., 2018) using well-designed data encoder and attention mechanisms.
as demonstrated in Wiseman et al.
contrasting
train_16912
Improving the parser and deriving a more semantically-aware set of compression rules can help achieve better grammaticality and readability.
we note that such errors are largely orthogonal to the core of our approach; a more refined set of compression options could be dropped into our system and used without changing our fundamental model.
contrasting
train_16913
Without the manual deduplication mechanism, our model matches the ground truth around 80% of the time.
a low accuracy here may not actually cause a low final ROUGE score, as many compression choices only affect the final ROUGE score by a small amount.
contrasting
train_16914
A natural solution to the data-scarcity issue is to resort to massive data from other domains.
directly leveraging abundant data from other domains is problematic due to the discrepancies in data distribution on different domains.
contrasting
train_16915
This enables the model to learn generic style information from both domains.
explicitly learning precise stylized information within each domain is crucial to generate domain-specific styles.
contrasting
train_16916
Existing approaches focus on encoding the passage, the answer and the relationship between them using complex functions and then generate the question in one single pass.
by carefully analysing the generated questions, we observe that these approaches tend to miss one or more of the important aspects of the question.
contrasting
train_16917
Over 68.6%, 66.7% and 64.2% of the generated questions from RefNet were respectively more fluent, complete and answerable when compared to the EAD model.
there are some cases where EAD does better than RefNet.
contrasting
train_16918
It guarantees the maximum volume of selected points with a minimum number of points (Figure 2 (c)).
it does not reduce redundancy between the points over the convex hull, and usually chooses a larger number of sentences than k. Marcu (1999) presents an interesting study regarding the importance of sentences: given a document, if one deletes the least central sentence from the source text, then at some point the similarity with the reference text drops abruptly, called the waterfall phenomenon.
contrasting
train_16919
In most evaluations, ROUGE scores are linear to SO ratios as expected.
VO has high variance across algorithms and aspects.
contrasting
train_16920
One thing to note is that XSum and AMI have fewer new words in their target summaries.
paper datasets (i.e., PeerRead and PubMed) include many, indicating that abstract text in academic papers is indeed "abstract".
contrasting
train_16921
LexRank is highly biased toward the position aspect.
MMR is extremely biased toward the importance aspect on XSum and Reddit.
contrasting
train_16922
(2018) investigate how to evaluate semi-supervised training algorithms in a realistic way; they differ from us in that they focus exclusively on semi-supervised learning (SSL) algorithms, and do not consider NLP explicitly.
in line with our conclusion, they report that recent practices for evaluating SSL techniques do not address the question of the algorithms' real-world applicability in a satisfying way.
contrasting
train_16923
As we will see in later sections, this is one of the main sources of search errors.
in many cases, the model score found by beam search is a reasonable approximation to the global best model score.
contrasting
train_16924
ELMo and BERT improve naive baselines by a large margin, indicating that a notable amount of commonsense knowledge has been acquired via pre-training.
even BERT still falls far behind human performance, indicating the need of further research.
contrasting
train_16925
Standard accuracy metrics indicate that modern reading comprehension systems have achieved strong performance in many question answering datasets.
the extent to which these systems truly understand language remains unknown, and existing systems are not good at distinguishing distractor sentences, which look related but do not actually answer the question.
contrasting
train_16926
Question answering tasks are widely used for training and testing machine comprehension and reasoning (Rajpurkar et al., 2016;Joshi et al., 2017).
high performance in standard automatic metrics has been achieved with only superficial understanding, as models exploit simple correlations in the data that happen to be predictive on most test examples.
contrasting
train_16927
Intuitively, the model is expected to choose the answer span after fully considering the entire question and paragraph.
traditional QA models suffered from the over-stability problem, and tended to be fooled by distractor answers, such as one containing an unrelated human name.
contrasting
train_16928
and then search the target entity America from KGs as the answer.
as many KGs are constructed automatically and face serious incompleteness problems (Bordes et al., 2013), it is often hard to directly get target entities for queries.
contrasting
train_16929
These demonstrate that LM-based methods perform very well on the associative sentences, as expected.
their performance drops significantly on the non-associative subset, when information related to the candidates themselves does not give away the answer.
contrasting
train_16930
It shows that Pun-GAN can generate more vivid pun sentences compared with the previous best model CLM+JD.
there still exists a big gap between generated puns and human-written puns.
contrasting
train_16931
Sentences that describe varying levels of respect for a demographic tend to contain more adjectives that are strongly indicative of the overall sentiment.
sentences describing occupations are usually more neutrally worded, though some occupations are socially perceived to be more positive or negative than others.
contrasting
train_16932
expresses conflict sentiment towards ambience aspect.
most existing studies ignore conflict opinions because they are sparse in the datasets (Tang et al., 2016b; He et al., 2018).
contrasting
train_16933
Recently, researchers have explored graph neural network (GNN) techniques for text classification, since GNNs do well at handling complex structures and preserving global information.
previous GNN-based methods mainly face two practical problems: a fixed corpus-level graph structure, which does not support online testing, and high memory consumption.
contrasting
train_16934
specific points in the sequence when computing its output.
in this case, the attention attends to the wrong context, as there are many words that have no correlation or do not correspond to actual words.
contrasting
train_16935
Such approaches may achieve high-quality extraction and labeling.
they rely on extracted PDF source markup (not always available, e.g.
contrasting
train_16936
Aletras and Stevenson (2013) devised a new method by mapping the topic words into a semantic space and then computing the pairwise distributional similarity (DS) of words in that space.
the semantic space is still built on PMI or NPMI.
contrasting
train_16937
In recent years, the Variational Autoencoder (VAE) has proved more effective and efficient at approximating deep, complex and underestimated variance in integrals (Kingma and Welling, 2013; He et al., 2017).
the VAE-based topic models focus on the construction of deep neural networks to approximate the
contrasting
train_16938
When performing cross-language information retrieval (CLIR) for lower-resourced languages, a common approach is to retrieve over the output of machine translation (MT).
there is no established guidance on how to optimize the resulting MT-IR system.
contrasting
train_16939
The BM25 model was evaluated against both the Europarl and Wikipedia collections.
to avoid the performance degradation caused by cross-collection evaluation (Cohen et al., 2018), we only evaluate the Wikipedia-trained neural model on the Wikipedia evaluation collection.
contrasting
train_16940
We conclude that none of the n-gram precision components of BLEU or variations on it provide consistently better correlations with IR performance.
given a specific collection and model, it is likely one of the alternatives we explored here or other metrics (e.g.
contrasting
train_16941
(2017) built on this work, exploring several strategies for rotating embeddings to obtain more semantically meaningful dimensions.
to our knowledge, orthogonal transformations themselves have not been used to represent word relationships; our work is novel in this respect.
contrasting
train_16942
(2017a) convert the WSD task into a sequence labeling task, thus building a unified model for all polysemous words.
neither of them can consistently beat the best word-expert supervised methods.
contrasting
train_16943
So far, we have obtained the adjective-noun pairs and value polarity relations for nouns.
it is still unclear whether the polarity is positive or negative.
contrasting
train_16944
While the meanings of defining words are important in dictionary definitions, it is crucial to capture the lexical semantic relations between defined words and defining words.
thus far, the utilization of such relations has not been explored for definition modeling.
contrasting
train_16945
Word analogy test results show that word representations trained with pre-trained Chinese n-grams perform better than those trained without (SISG(cjhr)), supporting our claim that our approach is able to transfer relevant knowledge from the Chinese language for detecting analogical relationships.
for the word similarity test, word vectors trained without Chinese embeddings perform better, suggesting that there are some trade-offs.
contrasting
train_16946
Previous works addressing this challenge mainly focused on word-level aspects such as word embeddings.
in many cases, languages share common subwords, especially closely related languages, but also languages that are seemingly unrelated.
contrasting
train_16947
One popular approach is to estimate the relation between noisy and clean, gold-standard labels and use this noise model to improve the training procedure.
most of these approaches only assume a dependency between the labels and do not take the features into account when modeling the label noise.
contrasting
train_16948
Increasing the number of clusters introduces smaller clusters for which it is difficult to estimate the noise matrix, due to the limited training resources.
decreasing the number of clusters can generalize too much, resulting in loss of information on the noise distribution.
contrasting
train_16949
GPA was recently used to jointly transform multiple languages into a shared vector space (Kementchedjhieva et al., 2018).
GPA assumes that a multi-way word correspondence is available, which is often not the case.
contrasting
train_16950
Note that MGPA needs a multi-way dictionary constructed from the bilingual dictionaries.
MPPA directly uses the raw data (the bilingual dictionaries).
contrasting
train_16951
Adversarial training is a popular method to ensure the transferred sentences have the desired target styles.
previous works often suffer from the content-leaking problem.
contrasting
train_16952
(2018) also made use of a conditional discriminator for multiple style transfer.
a few works, including Li et al.
contrasting
train_16953
The optimized network is inferred by choosing the edges with maximum weights in softmax.
DARTS is a "local" model because the softmax-based relaxation is imposed on each bundle of edges between two nodes.
contrasting
train_16954
Bilinear models such as DistMult and ComplEx are effective methods for knowledge graph (KG) completion.
they require large batch sizes, which becomes a performance bottleneck when training on large scale datasets due to memory constraints.
contrasting
train_16955
This technique is possible for standard benchmarks but not for large KGs, and we report results in Appendix D for all datasets small enough to allow for full contrastive training.
our main experiments use NLL of sampled softmax since our focus is on scalability.
contrasting
train_16956
As shown in Table 6, while AE achieves the best reconstruction when the noise is small (k = 1), its reconstruction deteriorates dramatically when k > 1, which suggests AE fails to learn a smooth latent space.
our method outperforms all the baselines by a large margin when k > 1.
contrasting
train_16957
BIOBERT is trained on PubMed abstracts and PMC full-text articles, and CLINICALBERT is trained on clinical text from the MIMIC-III database.
SciBERT is trained on the full text of 1.14M biomedical and computer science papers from the Semantic Scholar corpus (Ammar et al., 2018).
contrasting
train_16958
We find that combining the sparse global gradient with the dense local gradient improves convergence.
adding local information means that nodes' parameters will diverge over time.
contrasting
train_16959
mMiniBERT Effectiveness: The multilingual baseline mMeta-LSTM does not do well on low-resource languages.
mMiniBERT performs well and outperforms the state-of-the-art Meta-LSTM on the POS tagging task and on four out of six languages of the Morphology task.
contrasting
train_16960
Deep learning has achieved great success in the SLU field (Mesnil et al., 2015;Liu and Lane, 2016;Zhao et al., 2019).
it is notorious for requiring large labelled data, which limits the scalability of SLU models.
contrasting
train_16961
We observe no significant trend of favoring one branching direction over the other.
after training with the language modeling objective, PaLM-U shows a clear right-skewness, more than it should: it produces many more right-branching structures than the gold annotation.
contrasting
train_16962
When we manually change the sentence pattern into "List the most common hometown of teachers", the parser gives the correct keyword.
the character-based model is less sensitive to question sentences, which is likely because characters are less sparse compared with words.
contrasting
train_16963
In their work, they compare EigenSent with various sentence embedding models, including a different implementation of the Discrete Cosine Transform (DCT*).
to our implementation described in section 2.2, DCT* is applied at the word level along the word embedding dimension.
contrasting
train_16964
These models encode and contextualize sentences in two consecutive steps.
we propose an input representation which allows the Transformer layers in BERT to directly leverage contextualized representations of all words in all sentences, while still utilizing the pretrained weights from BERT.
contrasting
train_16965
(2017) is that they use the ROUGE scores to label the top (bottom) 20 sentences as positive (negative), and the rest are neutral.
we found it better to train our model to directly predict the ROUGE scores, and the loss function we used is Mean Square Error.
contrasting
train_16966
We observe that before finetuning, the attention patterns on [SEP] tokens and periods are almost identical between sentences.
after finetuning, the model attends to sentences differently, likely based on their different roles, which require different contextual information.
contrasting
train_16967
achieving a 65.6% success rate between agents that never interacted with each other.
when the inter-group interaction occurred only half as frequently as the intra-group interaction, the agents from the two groups can play together with a much lower 52.4% success rate.
contrasting
train_16968
To the best of our knowledge, ours is the first work which builds end-to-end models on a large-scale dataset for topic-focused summarization.
in contrast to our work focusing on content selection for topic-focused summaries, there has been previous work interested in generating Wikipedia articles.
contrasting
train_16969
Another series of works focus on template-based methods such as (Oya et al., 2014).
template-based methods are too rigid for our patternized summary generation task.
contrasting
train_16970
On one hand, the sections in the prototype summary that are not highly related to the prototype document are the universal patternized words and should be emphasized when generating the new summary.
the sections in the prototype document that are highly related to the prototype summary are useful facts that can guide the process of extracting facts from input document.
contrasting
train_16971
With the IB objective (Eq. 3), there is no benefit to keeping any information from Z, which strictly makes the first term worse (more mutual information between source and summary) and does not affect the second (Z is unrelated to Y).
in IB, this is a strict statistical relationship.
contrasting
train_16972
Thus, these supervised models do not generalize well to other kinds of sentence summarization or domains.
our method is applicable to any domain for which examples of the inputs to be summarized are available in context.
contrasting
train_16973
We also tried applying more complex transitions to x_i, like diagonal mapping (Trouillon et al., 2016), but did not observe improvements.
another option is to estimate it directly; however, this leads to poor alignment and a performance drop because y_t is not explicitly grounded on x_i.
contrasting
train_16974
Neural attention models (Bahdanau et al., 2015) with the seq2seq architecture (Sutskever et al., 2014) have achieved impressive results in text summarization tasks.
the attention vector comes from a weighted sum of source information and does not model the source-target alignment in a probabilistic sense.
contrasting
train_16975
Pointer generators are slightly better as it is trained to directly copy keyword from the source.
once it enters generation mode ("of british" in example 2 and "has been arrested" in example 3), the generation also goes out of control.
contrasting
train_16976
For this reason, our decomposition alone may not be very beneficial if coupled with standard attention.
our structured-attention model consistently performs much better than both baselines.
contrasting
train_16977
This kind of re-categorization has been shown to have considerable effects on performance (Guo and Lu, 2018).
one issue is that the precise set of re-categorization rules differs among models, making it difficult to tell whether performance improvements come from model optimization or from carefully designed rules.
contrasting
train_16978
In contrast, our method shows a strong capacity in capturing the main idea "the solution is about some patterns and a balance".
on the ordinary Smatch metric, their graph obtains a higher score (68% vs. 66%), which indicates that the ordinary Smatch is not a proper metric for evaluating the quality of capturing core semantics.
contrasting
train_16979
The reason is that the random order potentially produces a larger set of training pairs, since each random order strategy can be considered as a different training pair.
the deterministic order stabilizes the maximum likelihood estimate training.
contrasting
train_16980
One prominent approach for data collection has been to automatically generate pseudo-language paired with logical forms, and paraphrase the pseudo-language to natural language through crowdsourcing (Wang et al., 2015).
this data collection procedure often leads to low performance on real data, due to a mismatch between the true distribution of examples and the distribution induced by the data collection procedure.
contrasting
train_16981
Knowledge Graphs (KGs) such as Freebase and DBpedia have shown their strong power in many natural language processing tasks including question answering and dialog generation (Zhou et al., 2018).
these KGs are far from complete.
contrasting
train_16982
MultiR (Hoffmann et al., 2011) and MIMLRE (Surdeanu et al., 2012) introduce multi-instance learning where the instances mentioning the same entity pair are processed at a bag level.
these methods rely heavily on handcrafted features.
contrasting
train_16983
Fortunately, the automatically constructed lexicon contains rich word boundary information and word semantic information.
integrating lexical knowledge in Chinese NER tasks still faces challenges when it comes to self-matched lexical words as well as the nearest contextual lexical words.
contrasting
train_16984
Since word boundary information is unavailable, it is intuitive to use character information only for Chinese NER (He and Wang, 2008; Liu et al., 2010; Li et al., 2014), although such methods could result in the disregard of word information. (The code is available at https://github.com/DianboWork/Graph4CNER.)
word information is very useful in Chinese NER, because word boundaries are usually the same as named entity boundaries.
contrasting
train_16985
We observe significant improvements on distantly supervised datasets (i.e., KBP and NYT), with up to a 19% relative F1 improvement (Bi-GRU from 37.77% to 45.01% on KBP).
on the human-annotated corpus, the performance gain can hardly be noticed.
contrasting
train_16986
Many existing relation extraction (RE) models make decisions globally using integer linear programming (ILP).
it is nontrivial to make use of integer linear programming as a blackbox solver for RE.
contrasting
train_16987
Currently, research efforts have derived useful discrete features from dependency structures (Sasano and Kurohashi, 2008;Cucchiarelli and Velardi, 2001;Ling and Weld, 2012) or structural constraints (Jie et al., 2017) to help the NER task.
how to make good use of the rich relational information as well as complex long-distance interactions among words as conveyed by the complete dependency structures for improved NER remains a research question to be answered.
contrasting
train_16988
Empirically, we also found that those correctly retrieved entities of the DGLSTM-CRF (compared against the baseline) mostly correlate with the following dependency relations: "nn", "nsubj", "nummod".
DGLSTM-CRF achieves lower precision than BiLSTM-CRF, which indicates that the DGLSTM-CRF model makes more false-positive predictions.
contrasting
train_16989
Note that, the bilingual data needed for this approach is coarsely taken from the same domain.
the texts need not be aligned beyond this coarse level.
contrasting
train_16990
We also experimented with the following neural network architectures for the classification: LSTM-RNN (Hochreiter and Schmidhuber, 1997), HAN (Yang et al., 2016), QRNN (Bradbury et al., 2017), and VDCNN (Conneau et al., 2017).
these models did not achieve any substantial performance gain to justify their additional complexity.
contrasting
train_16991
However, they are limited to short-text classification tasks.
our model effectively uses contextual information and combines recurrent and projection operations to achieve efficiency and enable learning more powerful neural networks that generalize well and can solve more complex language classification tasks.
contrasting
train_16992
Previous methods have proposed to overcome this by relying on character-level embeddings and other neural models like character-CNNs.
these methods are often complex and slow to compute for long text (e.g., convolution kernels on devices without significant computational capacity) and still require explicitly storing character or sub-word sequences.
contrasting
train_16993
In such challenging scenarios, recent studies have used meta-learning to simulate the few-shot task, in which new queries are compared to a small support set at the samplewise level.
this sample-wise comparison may be severely disturbed by the various expressions in the same class.
contrasting
train_16994
Such non-parametric models only need to learn the representation of the samples and the metric measure.
instances in the same class are interlinked and have their uniform fraction and their specific fractions.
contrasting
train_16995
We have a large labeled training set with a set of classes C train .
after training, our ultimate goal is to produce classifiers on the testing set with a disjoint set of new classes C test , for which only a small labeled support set will be available.
contrasting
train_16996
Style of a text is a very general notion that is hard to define in rigorous terms (Xu, 2017).
the style of a text can be characterized quantitatively (Hughes et al., 2012); stylized texts could be generated if a system is trained on a dataset of stylistically similar texts (Potash et al., 2015); and author style could be learned end-to-end (Tikhonov and Yamshchikov, 2018b,c).
contrasting
train_16997
Indeed, the output that copies input gives maximal BLEU yet clearly fails in terms of the style transfer.
a wholly rephrased sentence could provide a low BLEU between input and output but high accuracy.
contrasting
train_16998
Learning-based approaches alone, or combined with lexicon-based methods, usually produce state-of-the-art (SOTA) performance (Mudinas et al., 2012;Zhang et al., 2018), and thus are widely used nowadays.
learning-based methods usually demand a large amount of annotated data to train models, which has become one of the performance bottlenecks.
contrasting
train_16999
BERT uses a cross-encoder: Two sentences are passed to the transformer network and the target value is predicted.
this setup is unsuitable for various pair regression tasks due to too many possible combinations.
contrasting