Dataset schema:
id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: class label (4 values)
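Each record below follows this schema. As a minimal sketch of how such rows could be loaded and inspected (assuming they have been exported to a local JSON Lines file named pairs.jsonl; the file name and path are illustrative, not part of the dataset):

    import json

    # Read one record per line; each has the fields id, sentence1, sentence2, label.
    with open("pairs.jsonl", encoding="utf-8") as f:
        rows = [json.loads(line) for line in f]

    # Peek at a few records; the label column has 4 classes, e.g. "contrasting".
    for row in rows[:3]:
        print(row["id"], row["label"])
        print("  sentence1:", row["sentence1"][:80])
        print("  sentence2:", row["sentence2"][:80])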
train_9900
Other frameworks include ParGram (Butt et al., 2002), the MetaGrammar project (de La Clergerie, 2005) and Grammatical Framework (Ranta, 2004).
the Grammar Matrix provides a particularly suitable framework for our analyses because it requires its libraries to be typologically robust.
contrasting
train_9901
If the modifier attaches to a sentence, the matrix clause must already have a subject, signified by an empty subject list ([SUBJ < >]).
if it attaches to a verb phrase, we constrain this list to be non-empty ([SUBJ < [ ] >]) and if it can attach to either the VP or S, we leave this constraint underspecified.
contrasting
train_9902
This results in three sentences failing to parse.
upon constructing such a rule, we confirmed that if this rule were in the grammar, those sentences would parse correctly.
contrasting
train_9903
Recurrent neural networks have achieved great success in many NLP tasks.
they have difficulty in parallelization because of the recurrent structure, so it takes much time to train RNNs.
contrasting
train_9904
To solve this problem, several scholars try to use convolutional neural networks (CNNs) (Lecun et al., 1998) instead of RNNs in the field of NLP (Kim, 2014;Kalchbrenner et al., 2014;Gehring et al., 2017).
CNNs may not obtain the order information of the sequences, which is very important in NLP tasks.
contrasting
train_9905
Most studies achieve speedups by improving the recurrent units.
the traditional connection structure has scarcely been questioned, in which each step is connected to its previous step.
contrasting
train_9906
Firstly, with only a few exceptions, much existing work focuses on "pairwise" MTL where there is a target task and one or several (carefully) selected auxiliary tasks.
can jointly learning many tasks benefit all of them together?
contrasting
train_9907
Moreover, at least one task significantly contributes to the success of All MTL at some point; if we remove it, the performance will drop.
COM generally negatively affects the performance of All MTL, as removing it often leads to performance improvement.
contrasting
train_9908
The following arguments and experiments are all with respect to the internal variability of K-fold CV.
as single train-test splits produce less stable prediction error estimates, our arguments are still valid for single data-splits but to an even greater degree.
contrasting
train_9909
Increasing K improves stability by averaging over more models.
each evaluation is performed on a smaller subset of the available data and so the evaluations themselves become less stable.
contrasting
train_9910
The combination of these competing variabilities makes up the total internal variance which, as confirmed by Figure 1, is nevertheless smaller than with single train-test splits.
if we have enough data that using 1/Kth of the data still provides stable evaluations, then we can see a reduction in internal variability as we increase K. This is not a general statement (Bengio and Grandvalet, 2004), as we will demonstrate in Section 4.
contrasting
train_9911
A further research interest is to investigate exactly when this assumption is violated and the effect that this can have on the effectiveness of parameter tuning.
increasing J has no effect on bias but does significantly reduce the internal variability.
contrasting
train_9912
When considering a larger incremental system, a processor usually seen as non-incremental might be incremental enough: If a language generation system works on the level of words, the grapheme to phoneme conversion does not need to be able to process sub-word input.
if input typed by a user should be vocalized, sub-word granularity might be needed.
contrasting
train_9913
Monotonicity limits the quality a component can produce as it can not revert a decision that turns out to be wrong later on in light of additional available input.
a non-monotonic component can always achieve the same non-incremental output as a non-incremental component by simply replacing all intermediate output with the one of the non-incremental component once all input is available.
contrasting
train_9914
As the manual transcription takes time, the system response is noticeably delayed in a non-incremental system where the dialogue system only starts to plan its response once transcription is complete.
the incremental system constructs a response as soon as possible, based on partial input.
contrasting
train_9915
Explicit corrections in spoken output are preferred by users to delay ( § 4.4), which could transfer to incremental MT.
lack of automatic evaluation likely requires an expensive human end-to-end evaluation due to the large deviation from non-incremental gold standard translations.
contrasting
train_9916
If the verb sense was neither in Propbank nor the list of previous decisions, annotators made note of the sense.
annotators were instructed to try to use the previous senses as much as possible.
contrasting
train_9917
Figure 1(a) shows a typical illustration of the generation of the t-th target word, where the decoder takes three inputs, i.e., source context c_t, previous target word y_{t-1} and current target state s_t, while generating one output y_t via output state o_t.
the study in suggests that different target words require inconsistent contributions from the source context (c_t) and the target context (i.e., y_{t-1} and s_t).
contrasting
train_9918
Both the reset gate r_t and the update gate z_t control the amount of the previous state h_{t-1} and the current input Ex_t being carried over to the current hidden state h_t.
the gate itself, either r_t or z_t, is uniformly set via fixed weights W_r (or W_z) and U_r (or U_z) for all inputs Ex_t and all previous states h_{t-1}.
contrasting
train_9919
This holds in particular for the systems that employ discriminative (and sometimes global) features, which are not available in our system.
also van Cranenburgh et al.
contrasting
train_9920
In the LSTM based model, the tokens are encoded/decoded using LSTM units, which can effectively summarize the temporal information of the sentences in source/target domains.
because of its recurrent nature, the LSTM based model is difficult to parallelize, so training efficiency becomes the major challenge.
contrasting
train_9921
(i) Most error types that appeared in English parsing also appeared in Vietnamese parsing.
the average errors per sentence in Vietnamese parsing were much larger than those in English, except for co-ordination.
contrasting
train_9922
The figure shows that the parsers' performance was improved when we eventually increased the training data from 1,337 to 6,337.
the F-score is almost saturated when the number of sentences was increased from 6,337 to 8,337 sentences.
contrasting
train_9923
The F-score of RNNGs-G especially achieved 78.2%, which was improved by 5.26% points in comparison with the use of an automatic POS tagger (PredictedPOS).
this figure does not indicate the error types that could be solved by improving POS tagging.
contrasting
train_9924
We can see that all of these four confusing POS pairs were potential contributors to bracketing errors that caused significant decreases in F-scores.
this table also indicates that the frequency of confusing POS pairs was not directly proportional to the contributions they made to parsing errors.
contrasting
train_9925
Our analysis results indicated that improving POS tagging was beneficial for some constructions, such as PP and clause attachments.
not all gold tags are helpful for solving the parsing errors, which means that the current 33 POS tags of the Vietnamese Treebank are not good enough for disambiguating constructions.
contrasting
train_9926
Neural models (Mikolov et al., 2010; Józefowicz et al., 2016) have recently shown great improvement.
most of them share a common issue: a large output vocabulary implies a prohibitive computation time, due to the output normalization, along with a prediction challenge in such a high-dimensional space.
contrasting
train_9927
This algorithm is theoretically proven to converge when the partition function is parametrized separately.
the scaling parameter is usually fixed and the un-normalized model therefore self-normalizes, as a side effect of the training procedure.
contrasting
train_9928
Since the goal is to approximate the data distribution with a model parametrized by θ, the conditional class distribution is defined as P(w|C=1, H) = P_θ(w|H) and P(w|C=0, H) = P_n(w), which gives the posterior class probabilities P(C=1|w, H) = P_θ(w|H) / (P_θ(w|H) + k P_n(w)) and P(C=0|w, H) = k P_n(w) / (P_θ(w|H) + k P_n(w)). If σ denotes the sigmoid function, these equations can be rewritten as a logistic regression problem: P(C=1|w, H) = σ(log P_θ(w|H) − log(k P_n(w))). This reformulation shows that training a classifier based on a logistic regression will estimate the log-ratio of the two distributions: this allows the learned distribution to be un-normalized, and the partition function to be parametrized separately.
the partition function is context-dependent.
contrasting
train_9929
On the large corpus, the conventional NCE model doesn't even reach a perplexity under the size of the vocabulary, whereas augmenting k to 500 is not enough to reach a suitable perplexity.
increasing k reduces the computational benefit of NCE, even with k far inferior to the vocabulary size.
contrasting
train_9930
The rule based machine translation demands various kinds of linguistic resources such as morphological analyzer and synthesizer, syntactic parsers, semantic analyzers and so on.
corpus based approaches (as the name implies) require parallel and monolingual corpora.
contrasting
train_9931
As a result, notable progress towards the development and use of MT systems has been made for these languages.
research in the area of MT for Ethiopian languages, which are under-resourced as well as economically and technologically disadvantaged, has started very recently.
contrasting
train_9932
No single-data baseline exists for zero-shot translation since we have no parallel data for those pairs.
we can pivot by data (i.e.
contrasting
train_9933
Models reviewed so far address the problem of morphology on the source side.
there is a group of models which study the same problem for the target side.
contrasting
train_9934
This is motivated by the nature of the task at hand: a document that is composed of high quality sentences is likely to have high quality as well.
simply aggregating sentence-level predictions is not a good strategy, as a document needs to be cohesive and coherent as a whole, i.e.
contrasting
train_9935
Table 3 also confirms the hierarchy between models on the FN 1.5 setup, albeit not as clearly as originally reported, with differences between NN and WSABIE models closer to 2 points of accuracy rather than 3.
Table 4 shows that the hierarchy between the NN + SENTBOW and NN + DEPBOW models is not robust across datasets, as the NN + DEPBOW model performs better than the NN + SENTBOW model on the FN 1.7 setup, by a margin of 0.6 accuracy points overall.
contrasting
train_9936
In other words, if more than 32 target sentences are available, on average one should prefer monolingual parsing (at least when using BEAM) over KL-BEAM transfer: the amount of knowledge transferred across languages seems very limited.
when considering the simple/complex scores, it appears that their difference is less pronounced for the cross-lingual parser than the monolingual ones, which results in distinct data-size equivalents.
contrasting
train_9937
Modern RNN architectures have been very successful in solving character-level NLP tasks (Chung et al., 2016b;Golub and He, 2016;Mikolov et al., 2012).
they do not make the learned linguistic structure explicit: rather it can be presumed to be cryptically encoded in the states of the hidden layers.
contrasting
train_9938
If this is not the case and the lower layer does not detect a boundary, the layer runs COPY.
if the lower layer does detect a boundary z_t^{ℓ-1}, it runs UPDATE (if z_{t-1}^{ℓ} = 0 and z_t^{ℓ-1} = 1). Output gate: the output embedding h_t^e of the HMLSTM is computed by the following gating mechanism: h_t^e = ReLU(g_t W_e h_t). It takes states from the three layers and computes a gating value g_t^1, g_t^2 and g_t^3 for each layer, using the parameter vector w.
contrasting
train_9939
When the HMLSTM cell runs the FLUSH operation, it computes the new cell-state by i_t u_t and the previous cell-state is dropped from the computation.
both i_t and u_t depend on the previous state h_{t-1}.
contrasting
train_9940
The discrepancy could be due to the fact that the script that the authors shared with us was optimized for PTB, and some of the settings might need to be changed for Text8.
since we have not found such differences reported in the paper, we used the same setting for both datasets.
contrasting
train_9941
The proposed method is built on bag-of-characters (BOC) representation.
BOC is prone to anagrams and thus is susceptible to word collisions, i.e.
contrasting
train_9942
Furthermore, it has the lowest standard deviation across all runs, which means it is robust to parameter initialization.
both CNN and RNN models have lower performance and higher variability compared to the proposed method.
contrasting
train_9943
This could be one of the reasons why models using CNN seem to have superior performance when only the best performance is reported.
our model does not result in peaks or serious drops in performance with different seed values, which makes it more suitable for real-world applications.
contrasting
train_9944
Technical support problems are very complex.
to regular web queries (that contain few keywords) or factoid questions (which are a few sentences), these problems usually include attributes like a detailed description of what is failing (symptom), steps taken in an effort to remediate the failure (activity), and sometimes a specific request or ask (intent).
contrasting
train_9945
Traditionally, this entails employing a large number of human agents to manually diagnose, troubleshoot, and resolve customer problems.
with ever increasing variety of products and the exponential growth of users, this model doesn't scale.
contrasting
train_9946
The key difference in these approaches and our problem setting is that they usually work on simpler, entity seeking questions.
questions in support are usually very complex, contain multiple attributes describing the situation that leads to a failure, attempts made to resolve the problem, and explicit asks.
contrasting
train_9947
We see a slight improvement in P@1 on the DB2 Prop dataset, and a 42.9% improvement in P@5.
on the DB2 Open dataset, we see very slight improvement on the P@1 and no improvement in P@5.
contrasting
train_9948
With the guidance of the information of global decoding, the model generates translation of higher accuracy and higher coherence.
as the deconvolution-based decoder is not responsible for generating translation, it is hard to interpret what each column of the generated matrix represents.
contrasting
train_9949
On one hand, in many different ways natural language is used to express the same information need.
the same word used in different sentences may express different meaning.
contrasting
train_9950
For a detailed description, we direct the reader to the original work.
we describe herein some elements which we have modified, a necessity given our changes to the taxonomy.
contrasting
train_9951
Entities that have been extracted through Wikification are often normalised "for free."
there is no simple way to get around this problem in the case of entities extracted through regular expressions, as in the case of numbers and dates where it is common for sentences to contain approximations.
contrasting
train_9952
Significant progress has been made in domains where a large amount of labeled training data is available.
obtaining rich annotated data is a time-consuming and expensive process, creating a substantial barrier for applying answer selection models to a new domain which has limited labeled data.
contrasting
train_9953
It has been widely studied and applied in many tasks (Yang and Mitchell, 2017;Liu et al., 2017).
its applicability to answer selection has yet to be explored.
contrasting
train_9954
Considering the example in Table 1, existing context-based models may assign a higher score to the negative answer than the positive answer, since the negative answer is more similar to the given question at word level.
with the background knowledge, we can correctly identify the positive answer based on the relative facts contained in the knowledge base (KB) such as (Dumbledore, played by, Michael Gambon), (Michael Gambon, cast in, Harry Potter).
contrasting
train_9955
Instead of learning the representations of the question and the answer separately, some recent studies exploit attention mechanisms to learn the interaction information between questions and answers, which can better focus on relevant parts of the input (Tan et al., 2016;.
these methods are subject to the amount of labeled data and the limited information provided by contexts.
contrasting
train_9956
Most recent studies in transfer learning for natural language processing employ deep neural networks to learn the shared feature representation between two different datasets (Mou et al., 2016;Li et al., 2017).
it was not until recent years that the application of transfer learning on QA received extensive attention.
contrasting
train_9957
We first employ n-gram matching to detect all the entity mentions in the sentence, and then retrieve a set of top-K entity candidates from KB for each entity mention.
the ambiguity issue of the entity still remains to be tackled, e.g., Santiago can refer to a city or a person.
contrasting
train_9958
These annotators considered the question meaningless.
other annotators considered a scenario in which some teams could not finish the season.
contrasting
train_9959
Current state-of-the-art models also attempt to capture this by using the reading of the query to guide the reading of the document (Yang et al., 2017;Dhingra et al., 2017), or using the memory of the document to help interpret the query (Munkhdalai and Yu, 2017).
these systems only consider uni-directional dependencies.
contrasting
train_9960
At this scale, this dataset can support training and evaluating a machine learning-based fact checking system.
the usefulness of the dataset may be limited due to the claims being provided without machine-readable evidence beyond originator metadata, meaning that systems can only resort to approaches to fact checking such as text classification or speaker profiling.
contrasting
train_9961
As with the claims collected by Vlachos and Riedel (2014), the size of the dataset prevents its use for training a machine learning-based fact checking system.
the broad range of types of claims in this dataset highlights a number of forms of misinformation to help identify the requirements for fact checking systems.
contrasting
train_9962
(2016) identify deceptive reviews also using lexical features.
rather than relying on labeled data, the authors induce labels over an unlabeled dataset through a semi-supervised learning approach that exploits a minimal amount of labeled data from related tasks in a multi-task learning setup.
contrasting
train_9963
In the early stages of a rumour, its actual veracity tends to be unknown.
as new evidence emerges over time, Twitter users take more pronounced and continuously evolving stance towards the information asserted in the rumour.
contrasting
train_9964
For instance, in the early stages of a rumour supporting tweets might prevail, simply due to the lack of information to the contrary.
when authoritative sources or reliable evidence emerge either for or against the rumour, a similar trend tends to be observed in the collective rumour stances.
contrasting
train_9965
The dataset consists of rumours that emerged during eight different events.
three of these events evoked less than five rumours consisting of five or more tweets.
contrasting
train_9966
Further reducing sequences' length to five tweets leads to worse classification performance.
performance decrease is considerably lower for system λ, with an F1 score of 0.618 (λ: 0.524).
contrasting
train_9967
All results in Table 2 are obtained using gold stance labels.
this restricts the idea of stance-based rumour verification to only data where human stance labels are available.
contrasting
train_9968
[Table 5: Overall scores using automatic labels. Precision / Recall / F1: λ_a 0.632 / 0.888 / 0.738; λ_a 0.669 / 0.975 / 0.794] In our results in Table 2 we showed that overall collective stance indeed is an important feature to consider for the purpose of veracity prediction.
this depends on how this collective feature is used.
contrasting
train_9969
In case of baseline B2 we used rather a crude way of capturing the stance wisdom by counting different stance types.
collective stance might obey some specific patterns of development, as indicated by Mendoza et al.
contrasting
train_9970
Since we seem to have overcome the need for manual annotation when using MSHMM, in future work the data sets can be extended to more recent events featuring potentially large amounts of tweets.
it is also reasonable to assume that events are heterogeneous in their stance distribution patterns, which might have an impact on classification performance and generalizability across events.
contrasting
train_9971
This is a known problem that there is a performance drop when the system moves to new unseen data.
the performance gap can be closed with extending training data (obtained either manually or using some distance learning) and focusing more on domain independent features.
contrasting
train_9972
Stance detection has been extensively studied in recent years as a task to predict a stance of a user or text toward a specific topic (Murakami and Raymond, 2010;Mohammad et al., 2016;Persing and Ng, 2016).
most studies have depended heavily on labeled training data (Tutek et al., 2016;Liu et al., 2016).
contrasting
train_9973
Accordingly, the SemEval-2016 task 6B released unlabeled data on a topic to be predicted and labeled data on other topics (Mohammad et al., 2016).
the accuracy on the setting dropped drastically compared with the setting when labeled data for the topic are given.
contrasting
train_9974
They asked the following questions to the users of social media: whether they are interested in specific issues (e.g., healthcare cost and retirement); and whether they have posted on those issues on Twitter or Facebook.
to their work covering only seven issues, we deal with more than 1000 topics.
contrasting
train_9975
Note that, it would be preferable to evaluate users who did not declare stances at all as the silent majority.
since it is virtually impossible for a third person to annotate stances of such users, we defined the pseudo silent majority for this experiment.
contrasting
train_9976
Therefore, our method is useful for analyzing opinions, including those of the silent majority.
the performance of the stance detection in this work is not yet sufficient.
contrasting
train_9977
Until now, computational approaches for fake news detection have relied on satirical news sources such as "The Onion" (Rubin et al., 2016), viral news tracking websites such as BuzzFeed (Potthast et al., 2017) and fact-checking websites such as "politiFact" (Wang, 2017) and "Snopes" (Popat et al., 2016).
the use of these sources poses several challenges and potential drawbacks.
contrasting
train_9978
The best classification performances were achieved with feature sets representing absurdity, punctuation, and grammar.
fact-checking approaches rely on automated verification of propositions made in the news articles (e.g., "Barack Obama assumed office on a Tuesday") to assess the truthfulness of their claims.
contrasting
train_9979
Although fact-checking approaches are becoming increasingly powerful, a major drawback is that they are built on the premise that the information can be verified using external sources, for instance FakeCheck.org and Snopes.com.
this is not a straightforward task, as external sources might not be available, particularly for just-published news items.
contrasting
train_9980
Specifically, legitimate news in tabloid and entertainment magazines seem to use more first person pronouns, talk about time (Relativity,Time, FocusPast), and use positive emotion words (posemo), which interestingly were also found as markers of truth-tellers in previous work on deception detection (Pérez-Rosas and Mihalcea, 2014).
fake content in this domain has a predominant use of second person pronouns (he, she), negative emotion words (negemo) and focus on the present (Foc.Pres).
contrasting
train_9981
(2015) and Enayet and El-Beltagy (2017) have created successful models using this premise.
these studies assume access to stance and veracity labels for the same data, which does not apply in our case as we do not have stance labels for all of the threads in the dataset (see section 3).
contrasting
train_9982
As the datasets contain a significant class imbalance, the majority baseline achieves fairly high accuracy scores.
due to the nature of this task, it is more important for a model to recognize all of the classes, especially false rumours, therefore the macro-averaged F-score is more important for performance evaluation.
contrasting
train_9983
Standard word embedding algorithms learn vector representations from large corpora of text documents in an unsupervised fashion.
the quality of word embeddings learned from these algorithms is affected by the size of training data sets.
contrasting
train_9984
While it may seem that this model is similar to SWESA, it is not, because in their model Maas et al. (2011) first learn word embeddings and then fit a classifier using two different objective functions.
SWESA is a single biconvex objective.
contrasting
train_9985
SWESA can be roughly mapped to the SDL by considering dictionary D of size k × V , where each column corresponds to a word embedding.
there are significant differences between SWESA and SDL.
contrasting
train_9986
This result is not surprising, given that the pre-trained RNTN and CNNs are (i) initialized with pre-trained word embeddings and (ii) trained on a data set with roughly 10 times more training data points than within SWESA.
note that on the imbalanced A-CHESS data set RNTN fails to perform any classification.
contrasting
train_9987
These representations are then used in supervised models for various classification tasks.
such tasks sometimes require very specific features that may not have been captured by the unsupervised objective.
contrasting
train_9988
The baseline method retrieved rather difficult texts, in which only 72.91% of the words were non-complex, below the desired minimum rate of 80%.
the WordList model and Learner+WordList model retrieved texts with 80.60% and 86.26% of non-complex words, respectively.
contrasting
train_9989
In the previously mentioned experiments we focused on more abstract features (POS tags), which we combine with punctuation marks to capture a higher-level representation of punctuation usage.
BoW/word n-gram features are the ones that lead to the best results on the task, even though they have been disputed as being useful (Brooke and Hirst, 2012), but potentially overfitting (Brooke and Hirst, 2011).
contrasting
train_9990
(2012), L2 English learners are confident about their use of punctuation.
the high improvement for high-proficiency learners in both imbalanced and balanced settings suggests that learners keep their L1 punctuation style even when achieving high English proficiency.
contrasting
train_9991
By the end of this process, we identified structures that could be considered problematic in terms of writing for a language learner in each of the three levels.
the divergence score gave us only the magnitude of the divergence, but it did not account for where the divergence exactly lies, and, for our purposes, we deemed important to know the direction of the divergence, i.e., to know whether a structure is more prominent in the reception or the production.
contrasting
train_9992
for these single word entity pairs.
for the performance of entities containing multiple words, the Fisher vector is better than word embedding average in the perspective of F-score.
contrasting
train_9993
Each news article is annotated with the discourse frame, structure and relationship.
the discourse units are also smaller with a minimum of one sentence and a maximum of five sentences, and their research object is primarily at the word level.
contrasting
train_9994
Due to the increase in the number of channels and the need for more customized content for different audience segments, content authors need to produce content at a very rapid pace.
a lot of such content has already been produced and is available in enterprise content repositories.
contrasting
train_9995
Given an enterprise author's requirement in the form of a snippet describing the content she is trying to build, standard querying mechanisms can be utilized to fetch the right set of articles that can cater to her needs.
constructing an initial piece of 'article' that can cater to her needs would still be a non-trivial task.
contrasting
train_9996
The content style in an enterprise corpus is fairly consistent.
there could be multiple representations of the same information across the repository.
contrasting
train_9997
(2017) extend standard extractive summarization techniques to build articles by combining different paragraphs in a corpus.
they do not account for coherence or information redundancies in the generated content.
contrasting
train_9998
(2011) aim to enhance question-answering by expanding a 'seed document' using web resources to construct the answer based on 'paragraph nuggets'.
both lexical and semantic redundancies are undesirable for a content author.
contrasting
train_9999
Taneva and Weikum (2013) identify relevant text snippets ('gems') by using an integer linear program to maximize the relevance of selected words, and prefer the selection of contiguous content pieces since this maximizes the coherence of the content, resulting in a lot of content being chosen from the same source.
the information for a specific article can come from multiple sources and hence this will not achieve information coverage.
contrasting