id: string (length 7–12)
sentence1: string (length 6–1.27k)
sentence2: string (length 6–926)
label: string (4 classes)
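The rows that follow share the four-column schema above. As a minimal sketch of how one might hold a row in code (the `ContrastRow` class and its field names are assumptions for illustration, not part of the dataset itself), using the train_9403 row as an example:

```python
from dataclasses import dataclass

# Hypothetical record type mirroring the four columns above;
# the class and field names are assumed, not defined by the dataset.
@dataclass
class ContrastRow:
    id: str
    sentence1: str
    sentence2: str
    label: str  # one of 4 label classes, e.g. "contrasting"

row = ContrastRow(
    id="train_9403",
    sentence1=("Over the last few decades, technology has provided us many "
               "powerful tools that have completely changed our daily routines."),
    sentence2=("one crucial area where technology has yet to have the "
               "significant impact suggested by its true promise is in education."),
    label="contrasting",
)
```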
train_9400
For investors such as venture capitalists, information content should reflect a firm's intrinsic value or potential for future growth.
measuring the information content of text can be challenging due to uncertainties and subjectivity.
contrasting
train_9401
A typical way of modeling a sentence is to treat it as a sequence and input the sequence to a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) model, which is capable of learning semantic features automatically.
information may have a different impact on individual firms and therefore, we need a way to represent information conveyed in a news sentence depending on a specific firm.
contrasting
train_9402
In the previous example, a bidirectional model would output only one representation for the sentence no matter what the target firm is.
the proposed tree transformation algorithm outputs different binarized trees when different targets are given.
contrasting
train_9403
Over the last few decades, technology has provided us many powerful tools that have completely changed our daily routines.
one crucial area where technology has yet to have the significant impact suggested by its true promise is in education.
contrasting
train_9404
As can be seen, the student fails to address primary MMP 2 from the reference answer.
the student successfully understood the concepts expressed in MMPs 1 and 3.
contrasting
train_9405
Although much of the work done by c-rater has been automated in the past years (Sukkarieh and Stoyanchev, 2009), it still requires an appropriate set of responses that have already been holistically scored by trained raters.
our approach is fully automated and can be used in a dynamic setting to recognize the focused relationships between a specific reference answer proposition and the student's response.
contrasting
train_9406
The UMASS system represents the documents as TF-IDF weighted vectors.
in order to compare documents in terms of their relations we need a relation-oriented representation.
contrasting
train_9407
(3) A much larger synthetic dataset (D3) of 106,000 documents from Yahoo News is used in order to evaluate the scalability of our method.
this dataset is not annotated and thus is not used for evaluating the accuracy.
contrasting
train_9408
The improvement in Det min score is statistically significant for the dataset (D1) at the p < 0.05 level using a paired t-test.
for dataset (D2) all methods perform close to the best method (LSH-RelEntFSD) and so the results are not statistically significant.
contrasting
train_9409
This value can be estimated empirically by a pilot study before the item is used in an actual test.
reliable estimates of item difficulty require a substantial amount of test taker responses.
contrasting
train_9410
Therefore we initially hypothesized that the text complexity of a common listening passage would be a strong predictor of the item difficulty.
the empirical results did not support this.
contrasting
train_9411
As discussed earlier, the former are grouped into sets and are not sequenced according to difficulty of individual items.
even for simpler items the item sequence was not a strong predictor of item difficulty.
contrasting
train_9412
We also found that the most highly ranked features were related to item vocabulary, such as lexical frequency of the words as well as the level of concreteness.
the system based on vocabulary features performed worse than the system based on all features.
contrasting
train_9413
The appropriate use of collocations is a challenge for second language acquisition.
high quality and easily accessible Chinese collocation resources are not available for both teachers and students.
contrasting
train_9414
Thus a better strategy should incorporate richer annotation of corpus and take more consideration of Chinese grammatical features.
in contrast to definitions of collocation in previous studies, we introduce function words into our collocation study, and identify four characteristics of collocations: (1) they can be word combinations with more than two words; (2) they can contain both content words and function words; (3) collocated words can be either adjacent or non-adjacent; (4) collocated words must hold syntactic or semantic relations.
contrasting
train_9415
We count the collocations of the 19 entries in these resources and find OCCA is much higher than D1 and D2 in collocation quantities.
by analysing the collocation data, we also find that OCCA does not contain some collocations in D1 and D2, e.g. "
contrasting
train_9416
外交 (waijiao/diplomatic) 问题 (wenti/problems)", and "农村 (nongcun/village) 发展 (fazhan/develop)", mainly because of the domain and size limit of the CTC corpus.
the WIKI database could serve as a good supplement because it covers nearly 63% of the 278 collocations that cannot be retrieved in CTC.
contrasting
train_9417
To offer a better solution to event detection without the above limitations, we propose to use a novel text stream representation: Burst Information Networks (BINets) (Ge et al., 2016a;Ge et al., 2016b).
in contrast to the keyword graph, which is based on word co-occurrence, a BINet is constructed based on burst co-occurrence.
contrasting
train_9418
So it is easy to enlarge the size of the corpora by mixing the data of the same emotion category from two corpora.
corpus fusion for emotion classification is challenging due to the following two factors: First, the emotion taxonomies are often different between two emotion corpora because of the lack of an accepted standard.
contrasting
train_9419
The other nodes correspond to ADUs, and the edges to SUPPORT or ATTACK relations, where the latter distinguish between REBUT (challenging the validity of the assertion in the ADU) and UNDERCUT (challenging not the validity of individual ADUs, but the supposition of a SUPPORT relation between two).
in contrast to an RST analysis, there is no constraint that only adjacent segments may be conjoined by a relation.
contrasting
train_9420
by using around thousands of dependencies.
if more than ten thousands of dependencies are added to the vocabulary, the performance tends to slightly drop.
contrasting
train_9421
Moreover, since consumption level is an important factor of economic status (Stutzer, 2004), effective prediction of consumption level will facilitate socio-economic research on social media.
unlike the gender attribute that is displayed on one's page, or political orientation that is unequivocally stated in one's tweets, consumption-behavior-related attributes are hard to acquire automatically from a microblogging service.
contrasting
train_9422
Intuitively, it is straightforward to generate two rankings of users, either by average spending or by topic preference.
it is noted that ρ is sensitive to small value differences of both measures, and it will be difficult to obtain robust correlation values in this case.
contrasting
train_9423
Thus, it is highly desirable to rapidly generate meaningful labels for a topic word distribution while assign labels for new emerging topics as correctly as possible.
to the best of our knowledge, no existing method has been proposed to satisfy this demand, except the simplest method, which uses the top-n words in the distribution to interpret a topic.
contrasting
train_9424
In most existing research efforts on statistical topic modeling, people generally either select top words in the distribution as primitive labels (Blei et al., 2003; Ramage et al., 2009; Ramage et al., 2011), or generate more meaningful labels manually in a subjective manner (Mei et al., 2006; Mei and Zhai, 2005).
extracting top terms is not very useful to interpret the coherent meaning of a topic (Mei et al., 2007).
contrasting
train_9425
Since the time for each iteration grows with the number of documents, the total time for BLLDA grows fast.
oLLDA only processes the newly arriving document and updates parameters.
contrasting
train_9426
For seq2BF without content introducing, we obtain low scores, showing that artificially splitting a sequence into two parts itself is not a fancy way of modeling natural language sentences.
given a keyword predicted by PMI statistics, the backward and forward sequence generation can significantly improve the dialogue system in comparison with pure seq2seq.
contrasting
train_9427
The coupling formulation is inspired by the Multimodal RNN that generates textual description from visual data .
unlike Karpathy's approach, we do not allow the feature that is generated by the CNN component to diminish between distant timesteps (i.e.
contrasting
train_9428
Our framework produces comparative results in metadata extraction tasks.
it significantly outperforms state-of-the-art systems in structural information extraction tasks.
contrasting
train_9429
The results of our initial analyses indicate that the flexibility offered by our tool is important for gaining insights into the data: higher-level analyses and visualisation can reveal patterns not visible in a simpler point visualisation (as in the case of the što/šta variable).
point visualisation can serve as a good tool for manual checkups of the reliability of both the extracted data and higher-level analyses.
contrasting
train_9430
Some patterns of such relations would certainly be relevant for an analysis of argumentation strategies, like the rebuttal of a common ground that seems to counter the author's stance.
similar to the low number of attack relations in persuasive essays (Stab and Gurevych, 2014), we found few insightful patterns of this kind in editorials.
contrasting
train_9431
The first unit could be interpreted as an implicit assumption about the reviews in the second unit, say, that the review is corrupt or hard to understand.
it could also simply be seen as an interjection not belonging to any argument.
contrasting
train_9432
Consistency of the frame lexicon.
so far, previous work has not investigated the overall correctness and consistency of the frame lexicon.
contrasting
train_9433
An English example for the verb turn is to turn something for TURN.01 and to turn into something for TURN.02.
we ask curators to create this form for TL verbs.
contrasting
train_9434
And if a larger filter is used, the convolution can detect more features, and the performance may be improved, too.
the networks will take up more storage space, and consume more time.
contrasting
train_9435
We note that the dominance of the MFS in the training instances now drops to 30-40%.
this also results in very impressive performances on the LFS instances, between 33.33% and 47.62%.
contrasting
train_9436
Table 2 shows that, indeed, monolingual semantic evaluation performance is consistently positively correlated with (semantic) language similarity for BBA.
for CCA, correlation is positive in eight cases and negative in six cases; moreover, coefficients are significant in only two cases.
contrasting
train_9437
This idea also underlies very well-known lexical semantic resources such as the paraphrase database (PPDB) (Bannard and Callison-Burch, 2005;Ganitkevitch et al., 2013); see also Eger and Sejane (2010).
we directly use bilingual embeddings for this similarity measurement by jointly embedding p and , which are arguably best suited for this task.
contrasting
train_9438
This has the disadvantage that translation pairs need to be known, which typically requires large amounts of parallel text.
bilingual word embeddings, which form the basis of our experiments, can be generated from as few as ten translation pairs, as demonstrated in Zhang et al.
contrasting
train_9439
The performance of word embeddings can be drastically affected by their parameters (Levy et al., 2015;Lai et al., 2015), which prompts parameter searching for different tasks.
accuracy of solving word analogies also varies immensely for different linguistic relations (Gladkova et al., 2016).
contrasting
train_9440
The results of GloVe and Skip-Gram improve with LRCos as compared to 3CosAdd, but the simple average 3CosAvg works even slightly better for them.
SVD gets an over 15% boost from LRCos, but not from 3CosAvg.
contrasting
train_9441
Unfortunately, this means that word analogies fail to provide sufficient "context" to words: ideally, king:queen :: man:woman and king:kings :: queen:queens should profile sufficiently different aspects of the king vector to avoid the nearest-neighbor trap.
it does not seem to work this way.
contrasting
train_9442
LRCos offers a significant boost in accuracy for detecting analogical relations over the most widely-used 3CosAdd method, including derivational relations where the latter does not perform well.
LRCos is by no means perfect, and there is room for further improvement, especially with respect to lexicographic relations.
contrasting
train_9443
We also experimented using a bi-LSTM.
we found GRUs to yield comparatively better validation data performance on semtags.
contrasting
train_9444
The convolutions used in this manner cover a few neighbouring letters at a time, as well as the entire character vector dimension (d c ).
in contrast to dos Santos and Zadrozny (2014), we treat a word analogously to an image.
contrasting
train_9445
The residual bypass effectively helps improve the performance of the basic CNN.
the tagging accuracy of the CNN falls below baselines.
contrasting
train_9446
(2016) system on semtags, we are substantially outperformed on UD 1.2 and 1.3 in this setup.
adding an auxiliary loss based on our semtags markedly increases performance on POS tagging.
contrasting
train_9447
Numerous studies address FrameNet's lack of lexical coverage (Pennacchiotti et al., 2008;Das and Smith, 2012;Pavlick et al., 2015).
little work has been done on extending frame relations except by Ovchinnikova et al.
contrasting
train_9448
We chose OPTED because it is a public, free-access dictionary based on Webster's Unabridged Dictionary, and an important and recognized dictionary.
we chose DRAE because it is the most authoritative dictionary of the Spanish language.
contrasting
train_9449
For example, a person or a bird is animate because they move or communicate under their own power.
a chair or a book is inanimate because they do not perform any kind of independent action.
contrasting
train_9450
For example, horse is normally animate, but a dead horse is obviously inanimate.
tree is an inanimate word but a talking tree is definitely an animate thing.
contrasting
train_9451
Our word animacy model achieved an F 1 of 0.98, whereas the prior state of the art achieved F 1 of 0.99 for marking inanimacy.
for marking animacy our model achieved F 1 of 0.90 where the state of the art achieved F 1 of 0.93.
contrasting
train_9452
(2017a) present the ZP-centered-LSTM architecture that learns to encode zero pronouns by their text words.
it could bring some defects: their model regards all the words in the sentence equally, and thus fails to capture informative parts of the sentence.
contrasting
train_9453
On the base of Zhao and Ng (2007), Chen and Ng (2013) further investigate their model, introducing two extensions to the resolver, namely, novel features and zero pronoun links.
these works rely heavily on annotated datasets.
contrasting
train_9454
All entities of Our 4 can be identified from the context of this dialogue so it is annotated with the known entities Jack and Judy.
only one of you 7 can be identified in this context so it is annotated with the known entity Ross and also OTHER, implying that it refers to some other entity that can be identified in a separate dialogue.
contrasting
train_9455
The entity-mention models try constructing representations of discourse entities, and associating different mentions with the entity representations (Luo et al., 2004).
none of these model types consider more than two mentions together at the low level.
contrasting
train_9456
In most cases, we have dozens of mentions in an article, which is not an issue.
some long articles have hundreds of mentions, so generating all triads is impractical and unnecessary.
contrasting
train_9457
For testing, we consider the mentions with distances up to 40.
this does not mean the long-distance coreference can never be detected.
contrasting
train_9458
In principle, we would like distance metrics to have 0 as the minimum, which can be achieved by subtracting 1.
for the purpose of clustering, it is not necessary.
contrasting
train_9459
Our system performs by far the best with the MUC evaluation metric, and is also the best with B 3 metric, measured with F1 score.
the performance is quite low on the CEAF φ4 metric.
contrasting
train_9460
From a text, system cr1 identifies two entities {the American administration, it 1 , it 2 , it 3 } and {they 1 , they 2 , them, their}.
it misses the fact that the two are actually the same entity.
contrasting
train_9461
Our intuition is cr1 does a better job, because it resolves much more coreference relations.
CEAF φ4 will score cr2 higher.
contrasting
train_9462
Paradigms can extend the attested morphological forms from a few high-frequency words to low-frequency words, likely the majority, for which there is little data.
high quality paradigms may prove effective at detecting spurious morphological relations between words that have plagued many previous models.
contrasting
train_9463
This is due to the fact that the transformation rules can well capture the morphology of English and thus significantly increase the true positive segmentations.
the transformation rules increase the over-segmentation problem for Turkish and Finnish as indicated by the decreased precisions.
contrasting
train_9464
Indigenous languages of the American continent are highly diverse.
they have received little attention from the technological perspective.
contrasting
train_9465
Regarding parallel corpora, the Bible is a common source that contains translations into many of these languages, although it is not always straightforward to extract the content in a digital format.
there are some projects that offer parallel content through a web search interface, e.g., Axolotl (Spanish-Nahuatl parallel corpus) that was mainly gathered from non-digital sources (books from several domains), the documents have dialectal, diachronic and orthographic variation (Gutierrez-Vasques et al., 2016).
contrasting
train_9466
In NLP, lemmatization and stemming methods are used to reduce the morphological variation by converting words forms to a standard form, i.e., a lemma or a stem.
most of these technologies are focused on a reduced set of languages.
contrasting
train_9467
One way to overcome this is to include linguistic information, e.g., morphology, syntax.
this kind of knowledge and these linguistic tools are not always available, especially for low-resource languages.
contrasting
train_9468
Keyword heuristics have also been used to overcome language and domain barriers using bilingual dictionaries (Szarvas, 2008;Tran et al., 2013).
a weak bilingual dictionary could result in low coverage with this method.
contrasting
train_9469
This causes the model to be able to learn more unique contexts for SF types, thereby increasing recall.
the annotated dataset in the target language ( A ), being made from the output of the NN model on a separate dataset, will mostly help the model to do better recognition of false positives, thereby improving precision.
contrasting
train_9470
The additional English data apparently allowed the NN model to find a strong correlation between the crime violence class and the terrorism class, which is consistent with our intuition.
the NN model fine-tuned on the Tigrinya annotations apparently found the crime violence and terrorism classes tend to occur alone (last column of Figure 3c).
contrasting
train_9471
(2015) constructed a subject-shared predicate network with an accurate recognizer of subject-sharing relations and deterministically propagated the predicted subjects to the other predicates in the graph.
this method is applied only to subject sharing, so it cannot take into account the relationships among multiple argument labels.
contrasting
train_9472
(2017) used Grid RNN to incorporate intermediate representations of the prediction for one predicate generated by an RNN layer into the inputs of the RNN layer for another predicate.
in this model, since the information of multiple predicates also propagates through the RNNs, the integration of the prediction information is influenced by word order and distance, which is not necessarily important for aspects of syntactic and semantic relations.
contrasting
train_9473
The model is, however, not applicable to tasks with structured output such as trees or graphs.
in contrast to POS tagging or NER, where we try to detect annotation errors in the predicted labels for individual tokens, when looking for errors in parse trees we have to deal with directed, labelled relations between nodes, and changing the relation or label between two nodes in the tree usually requires the adjustment of other attachment and labelling decisions in the same tree.
contrasting
train_9474
Generating trees. Once we are done with error correction, we want to output the trees.
what we get from the variational model are local decisions for individual labels and edges, and it is not straightforward to generate connected trees based on these decisions.
contrasting
train_9475
Please note that we did not optimise the parsers on the data and thus the comparison should be taken with a grain of salt.
it is interesting to see that there is not one best parser but that the different parsers (excluding the 'vintage' Malt system) all yield results in the same range and the best performing parser varies depending on the dataset.
contrasting
train_9476
The authors report a high precision for automatic error correction.
their method only addresses attachment errors but does not handle label errors.
contrasting
train_9477
Ideally, our model assigns a high probability to this span.
the span [0, 3] is not a valid constituent span and our model labels it with 0.
contrasting
train_9478
The use of RNN for the evaluation of grammaticality can already be found in Tomida and Utsumi (2013) (a reply to a non-NN learning model in Pearl and Sprouse 2013).
their RNN (a Jordan RNN, Jordan 1997) worked on abstract data (it was trained on preassigned constituent labels and had to generate other label sequences), a choice that in our opinion requires too many underlying assumptions.
contrasting
train_9479
The fact that the ungrammatical cases were graded, with pronouns next best after gaps (see Table 3) shows that the network wasn't just using the simple rule "If it starts with Wh*, pick the version with fewer words, if not, don't".
the overwhelming effect of processing factors like the level of embedding (Figure 2), and the fact that the apparent success of the NN in the island task is not based on the island extraction effect itself, cast doubt on the idea that the NN is using an abstract dimension of 'grammaticality'.
contrasting
train_9480
As is well-known to anyone who has practiced a musical instrument, pronounced tongue-twisters or read centrally-embedded sentences, processing difficulties improve a lot with practice.
multiple repetitions of "Who did John see Mary?"
contrasting
train_9481
The states that improve the most over baseline, with 15% improvement or more using only textual features are Oregon, Oklahoma, Tennessee, D.C., South Carolina, Louisiana (lower), Georgia (lower), and Alabama (lower).
text is least predictive in Connecticut, Wyoming, Idaho, New Jersey, Utah (upper), New Hampshire (upper), North Dakota (upper), all underperforming the baseline.
contrasting
train_9482
Because our goal is to investigate whether a model can learn to search through a document, it is important that a non-negligible fraction of the questions require navigation through the document.
in Wikipedia each document starts with a preface that summarizes the document, and thus often contains the answer.
contrasting
train_9483
We observe, compared to TRIV-IAQA, that the first occurrence of an answer is much more spread out across the document, and that the median increases from 3 to 14, which will require more navigation from the agent.
even in TRIVIAQA-NOP answers tend to appear at the beginning of the document, because document content is usually organized by importance.
contrasting
train_9484
Another thrust has focused on skimming text in a sequential manner (Yu et al., 2017), or designing recurrent architectures that can consume text quickly (Bradbury et al., 2017;Seo et al., 2018;Campos et al., 2018;Yu et al., 2018).
to the best of our knowledge no work has previously applied these methods to long documents such as Wikipedia pages.
contrasting
train_9485
Co-occurrence probabilities of such events play important roles for predicting Conjunction.
there are few connectives that can be used for capturing the co-occurred events, or they are extraordinarily general.
contrasting
train_9486
A connective like "than" is effective to signal a pair of Contrastive events.
due to the limitation of literal expression, in general, such event mentions aren't directly connected by "than", but instead the concrete elements in the events are.
contrasting
train_9487
Note that when α = 0, the regularizer is disabled and the output probability distribution is very noisy and neighboring values have large variance.
when the regularizer is properly enabled (α = 0.16), observe how the output probability distribution is much smoother and neighboring probability values are more similar.
contrasting
train_9488
On short documents NEO is easily misled due to lack of effective sample size.
observe that as the length of the document increases NEO's error reduces significantly (note Table 3 that for 2000 word documents on COHA-FICTION the mean absolute error is now 24.80).
contrasting
train_9489
We can see that an equation system includes one or more equations and is a solution for a specific math word problem.
an equation template can correspond to several math problems.
contrasting
train_9490
The output equation structure from the basic model is correct.
the numbers are aligned wrongly.
contrasting
train_9491
(2017) apply a standard seq2seq model to generate equations under the constraint of one variable.
the model is prone to generate numbers that do not exist in the problem or to place numbers in wrong positions.
contrasting
train_9492
This algorithm applies a personalized complex word identification (CWI) model in two steps in the LS pipeline: • CWI for detection: Most current approaches deploy a user-independent CWI model as the first step in their pipeline to detect words that should be simplified (Section 2.1).
we train a personalized CWI model for this purpose, such that the choice of target words can vary from one user to another.
contrasting
train_9493
This configuration also yielded the highest precision (18.01%), outperforming the baseline by almost 10%.
the highest accuracy (50.91%), similar to the first experiment, was obtained by using automatic CWI for ranking only (detect=nil, rank=auto).
contrasting
train_9494
Adding POS information benefits the SimVerb and SimLex verb performance, which can be attributed to the coarse disambiguation of verb-noun and verb-adjective homonyms.
the type.POS targets show a considerable performance drop on SimVerb and SimLex verbs.
contrasting
train_9495
We have introduced word class suggestion as an evaluation benchmark for word embeddings.
the WCS output might be used in vocabulary-based application scenarios, e.g.
contrasting
train_9496
Paetzold and Specia (2016a) claim that their approach outputs customized simplifications depending on the user's profile, and evolves as users provide feedback on the output produced.
they provide no details of the approach they use to do so, nor do they present any results showcasing its effectiveness.
contrasting
train_9497
Our ETD framework admits any means of identifying the keyphrases, so any keyphrase generation algorithm can be employed (e.g., TextRank).
since our task is slightly different in that we want to generate keyphrases for overall trend detection of an area (as opposed to the more typical characterization of a single publication), we need to introduce several refinements.
contrasting
train_9498
We can see that in this example the premise entails the hypothesis, and in order to correctly identify this relation, one has to know that the word kettle entails the word pot.
if we train a neural network model on a set of labeled sentence pairs, and if the training dataset does not contain the word pair kettle and pot anywhere, it would be hard for the learned model to know that kettle entails pot and subsequently predict the relation between the premise and the hypothesis to be entailment.
contrasting
train_9499
However, if we train a neural network model on a set of labeled sentence pairs, and if the training dataset does not contain the word pair kettle and pot anywhere, it would be hard for the learned model to know that kettle entails pot and subsequently predict the relation between the premise and the hypothesis to be entailment.
from WordNet we can easily find out that pot is a direct hypernym of kettle and therefore kettle should entail pot.
contrasting