Dataset columns:
  id          string, lengths 7–12
  sentence1   string, lengths 6–1.27k
  sentence2   string, lengths 6–926
  label       string, 4 classes
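For reference, a minimal Python sketch of how records with this schema could be represented and filtered. The `Record` dataclass, the `contrasting_pairs` helper, and the truncated sample strings are illustrative assumptions, not part of the released data; only the column names, the id format, and the "contrasting" label come from the preview below.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple


@dataclass
class Record:
    # One sentence pair from the dataset; `label` is one of 4 classes
    # (the preview below only shows the "contrasting" class).
    id: str          # 7–12 characters, e.g. "train_14700"
    sentence1: str   # 6–1.27k characters
    sentence2: str   # 6–926 characters
    label: str


def contrasting_pairs(records: Iterable[Record]) -> Iterator[Tuple[str, str]]:
    """Yield (sentence1, sentence2) for records labeled 'contrasting'."""
    for r in records:
        if r.label == "contrasting":
            yield (r.sentence1, r.sentence2)


# Usage with one (truncated) record from the preview.
sample = Record(
    id="train_14700",
    sentence1='In recent work, some research explore "deep" expressions ...',
    sentence2="these expressions suffer from the limitation of ...",
    label="contrasting",
)
print(next(contrasting_pairs([sample])))
```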
train_14700
In recent work, some research explore "deep" expressions such as discourse commitments or strict logic for representing the text.
these expressions suffer from the limitation of inference inconvenience or translation loss.
contrasting
train_14701
Through logic inference, some implicit knowledge behind the text can be mined.
it is not easy to translate the natural language text into formal logic expressions and the translation process inevitably suffer from great information loss.
contrasting
train_14702
With the result in Table 2, it is true that these pre-trained word embeddings have a good effect on our performance (we use word2vec 3 on Chinese Gigaword Corpus for word pre-training).
as shown in Table 2, compared to standard pre-training, the influence of heterogenous data is more evident.
contrasting
train_14703
More recently, Zhou and Xu (2015) proposed L-STM RNN approach for English Semantic Role Labeling, which shared similar idea with our model.
the features used and the network architecture were different from ours.
contrasting
train_14704
Due to the commonality in natural language, negation focus plays a critical role in deep understanding of context.
existing studies for negation focus identification major on supervised learning which is timeconsuming and expensive due to manual preparation of annotated corpus.
contrasting
train_14705
In this paper, we thus focus on graph-based ranking methods (Mihalcea and Tarau, 2004) which first build a word graph according to word co-occurrences within document, and then use random walk algorithms (e.g., PageRank) to measure word importance.
for negation focus identification, the graph-based methods may suffer from the following two problems: (a) the words in graphbased methods are strongly connected by cooccurrence rather than semantic content, which do not necessarily guarantee that they are relevant to the negation focus in context; and (b) identifying a negation focus may be affected by not only the relatedness of surrounding words but also its importance in current document which is not considered in standard random walk algorithms.
contrasting
train_14706
Vector space models of language like the ones presented in (Collobert et al., 2011b;Mikolov et al., 2013;Pennington et al., 2014) create good representations for the individual words of a language.
the words in a language can be combined into infinitely many distinct, wellformed phrases and sentences.
contrasting
train_14707
Interpretation of events-determining what the author claims did or did not happen-is important for many NLP applications, such as news article summarization or biomedical information extraction.
detecting events and assessing their factuality is challenging.
contrasting
train_14708
(2012), who advocated for representing factuality from the reader's perspective as a distribution of categories, but their annotation process requires manual normalization of the text.
we model factuality from the author's perspective with scalar values, and we have an endto-end crowdsourced annotation pipeline.
contrasting
train_14709
A prerequisite relation describes a basic relation among concepts in cognition, education and other areas.
as a semantic relation, it has not been well studied in computational linguistics.
contrasting
train_14710
adverse drug reactions, infectious diseases) in particular communities.
in order for a machine to understand and make inferences on these health conditions, the ability to recognise when laymen's terms refer to a particular medical concept (i.e.
contrasting
train_14711
We firstly observe that for the vSim baseline, excepting for word vector representation with vector size 50 learned using GloVe from the Twitter collection, word vector representations learned using either CBOW or GloVe are more effective than the one-hot representation.
the difference between the MRR-5 performance is not statistically significant (p > 0.05, paired t-test).
contrasting
train_14712
We argue that some types are relatively simple to answer, partly due to the limited vocabulary used, which explains why simple lexical matching methods can perform well.
some questions require understanding of higher level concepts such as those of the story and its characters, and/or require inference.
contrasting
train_14713
Then we attempted to account for them by searching the text for the sentence indicating that she had gone home and reducing the weight for all subsequent sentences.
since the improvements due to these rules were negligible, we did not include them in our final system.
contrasting
train_14714
(2013) demonstrate that the MC160 and MC500 have similar ratings for clarity and grammar, and that humans perform equally well on both.
in many cases MC500 appears to be designed in such a way to confuse lexical algorithms and encourage the use of more sophisticated techniques necessary to deal with phenomena such as elimination questions, negation, and common knowledge not explicitly written in the story.
contrasting
train_14715
Existing learning algorithms have primarily focused on building actionable meaning representations which can, for example, directly query a database (Liang et al., 2011;Kwiatkowski et al., 2013) or instruct a robotic agent (Chen, 2012;Artzi and Zettlemoyer, 2013b).
due to their end-to-end nature, such models must be relearned for each new target application and have only been used to parse restricted styles of text, such as questions and imperatives.
contrasting
train_14716
AMR meaning bank provides a large new corpus that, for the first time, enables us to study the problem of grammar induction for broad-coverage semantic parsing.
it also presents significant challenges for existing algorithms, including much longer sentences, more complex syntactic phenomena and increased use of noncompositional semantics, such as within-sentence coreference.
contrasting
train_14717
Given the partial derivations, our gradient computation is identical to Equation 2.
in contrast to Collins and Roark (2004) our data does not include gold derivations.
contrasting
train_14718
To minimise handcrafting, Stent and Molina (2009) proposed learning sentence planning rules directly from a corpus of utterances labelled with Rhetorical Structure Theory (RST) discourse relations (Mann and Thompson, 1988).
the required corpus labelling is expensive and additional handcrafting is still needed to map the sentence plan to a valid syntactic form.
contrasting
train_14719
Paliwal, 1997) have been shown to be effective for sequential problems (Graves et al., 2013a;Sundermeyer et al., 2014).
applying a bidirectional network directly in the SC-LSTM generator is not straightforward since the generation process is sequential in time.
contrasting
train_14720
In addition, the slot error rate (ERR) as described in Section 3.5 was computed as an auxiliary metric alongside the BLEU score.
for the experiments it is computed at the corpus level, by averaging slot errors over each of the top 5 realisations in the entire corpus.
contrasting
train_14721
For every task, the +Expectation method has performances that often seem to be higher than the simple baseline (both for the 50d case or the 100d case).
only some of these differences are significant.
contrasting
train_14722
The system described in (Hosseini et al., 2014) handles only addition and subtraction problems, and requires additional annotated data for verb categories.
our system does not require any additional annotations and can handle a more general category of problems.
contrasting
train_14723
It tries to map numbers from the problem text to predefined equation templates.
they implicitly assume that similar equation forms have been seen in the training data.
contrasting
train_14724
However, they implicitly assume that similar equation forms have been seen in the training data.
our system can perform competitively, even when it has never seen similar expressions in training.
contrasting
train_14725
We believe that the improvement was mainly due to the dependence of the system of (Roy et al., 2015) on lexical and neighborhood of quantity features.
features from quantity schemas help us generalize across problem types.
contrasting
train_14726
First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data.
the heuristic alignment can fail, resulting in wrong label problem.
contrasting
train_14727
These methods have been shown to be effective for relation extraction.
their performance depends strongly on the quality of the designed features.
contrasting
train_14728
successfully incorporated multiinstance learning into traditional Backpropagation (BP) and Radial Basis Function (RBF) networks and optimized these networks by minimizing a sum-of-squares error function.
to their method, we define the objective function based on the cross-entropy principle.
contrasting
train_14729
The idea is to capture the most significant features (with the highest values) in each feature map.
despite the widespread use of single max pooling, this approach is insufficient for relation extraction.
contrasting
train_14730
Here we considered a fact or tuple as linked if both of its entities were linked to Freebase, as partiallylinked if only one of its entities was linked, and as non-linked otherwise.
to previous work (Riedel et al., 2013;, we retain partially-linked and non-linked facts in our dataset.
contrasting
train_14731
Other words with embeddings similar to "driving" that appear on the dependency path between the mentions will similarly receive high weight for the ART label.
if the embedding is similar but is not on the dependency path, it will have 0 weight.
contrasting
train_14732
Roth and Woodsend (2014) considered features similar to ours for semantic role labeling.
in prior work both of above approaches are only able to utilize limited information, usually one property for each word.
contrasting
train_14733
As we can see, POS tags, grammatical relations, and WordNet hypernyms are also discrete (like words per se).
no prevailing embedding learning method exists for POS tags, say.
contrasting
train_14734
The Bayesian model of X&T is important in providing insight into how people might reason about samples of data that exemplify categories.
it relies on having complete, built-in knowledge about the taxonomic hierarchy, including both the detailed composition of categories and the values for between-object similarities, drawn from adult similarity judgments.
contrasting
train_14735
In the FAS model, all the features for a word are dependent: increasing the probability of any feature results in decreasing the probability of others.
this interaction is not always desirable, as many features regularly co-occur in the world.
contrasting
train_14736
conditions), the model, like children, generalizes to the lowest level category in the hierarchy that is consistent with the training items, roughly equally preferring items from that category or lower, with slight preference for the lower categories.
after seeing a single training example (the (a) Adult data: 3 subord.
contrasting
train_14737
(2013) present latent variable models for unsupervised learning of latent character types in movie plot summaries and in English novels, taking authorial style into account.
even the state-of-the-art NLP work rather describes personas of fictional characters by their role in the story -e.g., action hero, valley girl, best friend, villain etc.
contrasting
train_14738
This poses a challenge for fictional characters.
strong correlations have been found between the self-reported and perceived personality traits (Mehl et al., 2006).
contrasting
train_14739
We have shown that exploiting the links between lexical resources to leverage more accurate semantic information can be beneficial for this type of tasks, oriented to actions performed by the entity.
the human annotator agreement in our task stays high above the performance achieved.
contrasting
train_14740
Expectation-maximization algorithms, such as those implemented in GIZA++ pervade the field of unsupervised word alignment.
these algorithms have a problem of over-fitting, leading to "garbage collector effects," where rare words tend to be erroneously aligned to untranslated words.
contrasting
train_14741
They associate two models via the agreement constraint and show that agreement-based joint training improves alignment accuracy significantly.
enforcing agreement in joint training faces a major problem: the two models are restricted to one-to-one alignments (Liang et al., 2006).
contrasting
train_14742
propose a joint model to process word segmentation and informal word detection.
text normalization is split to another task ).
contrasting
train_14743
Their model is trained on a partially annotated microblog corpus.
our model can be trained on existing annotated corpora in standard text.
contrasting
train_14744
release a Chinese microblog corpus for word segmentation and informal word detection.
there are no microblog corpora annotated Chinese word segmentation, POS tags, and normalized sentences.
contrasting
train_14745
A Naive h function is straightforwardly defined as the annotation of each local rule part of the tree in a bottom-up fashion: 1 Base: When βˆƒa ∈ A such that h(a) = βŠ₯ we say that the annotation has failed.
the naive procedure fails in a large number of cases.
contrasting
train_14746
Given a grammar in CNF, we can prove that for a sentence of length n, the number of derivation steps for a shift reduce parser is 3n βˆ’ 1.
our tagset-preserving transformation also introduces rules of the form (b), which explains why the number of derivation steps may vary from 2n βˆ’ 1 to 3n βˆ’ 1.
contrasting
train_14747
This approach induces a compact feature representation by combining atomic features.
unlike traditional tensor models, it enables us to incorporate prior knowledge about desired feature interactions, eliminating invalid feature combinations.
contrasting
train_14748
These gains were obtained by a careful construction of features templates that combine standard dependency parsing features and typological features.
we propose an automated, tensor-based approach that can effectively capture the interaction between these features, yielding a richer representation for crosslingual transfer.
contrasting
train_14749
While these methods can automatically combine atomic features into a compact composite representation, they cannot take into account constraints on feature combination.
our method can capture features at different composition levels, and more generally can incorporate structural constraints based on prior knowledge.
contrasting
train_14750
In addition, we consider the semi-supervised transfer scenario, in which we assume 50 sentences in the target language are available with annotation.
we observe that random sentence selection of the supervised sample results in a big performance variance.
contrasting
train_14751
We believe that the reason is two-fold: (a) k-means finds a local maximum during clustering; (b) we do hard clustering instead of soft clustering.
we detected that the clustering algorithm gives a more diverse set of solutions, when the features are perturbed.
contrasting
train_14752
2013; Choi and McCallum, 2013; Ballesteros and Bohnet, 2014).
these methods are based on discrete features and suffer from the problems of data sparsity and feature engineering (Chen and Manning, 2014).
contrasting
train_14753
Le and Zuidema (2014) proposed a generative re-ranking model with Inside-Outside Recursive Neural Network (IORNN), which can process trees both bottom-up and top-down.
iORNN works in generative way and just estimates the probability of a given tree, so iORNN cannot fully utilize the incorrect trees in k-best candidate results.
contrasting
train_14754
This is the case of many application like message dictation for example.
in the case of service at hand, an initial concept must be detected (the action), therefore, FAIL RAW improves the performance.
contrasting
train_14755
Moreover, continuous representation of words in neural network LMs can also support robust modeling (Bengio et al., 2003;Mikolov et al., 2010).
previous works are focused on maximizing performance in the same domain as that of the training data.
contrasting
train_14756
In other words, it is uncertain that these technologies robustly support out-of domain tasks.
latent words LMs (LWLMs) (Deschacht et al., 2012) are clearly effective for outof domain tasks.
contrasting
train_14757
LW is comparable to MKN and HPY, and inferior to RNN in terms of PPL.
in test sets (out-of domain tasks), PPL improved with the increase in the number of layers in LW.
contrasting
train_14758
We use the following features in our implementation of this model.
any relevant ASR and SMT feature may be readily added to this model.
contrasting
train_14759
As this problem is NP-hard, pruning low-weight concepts is required for the ILP solver to find optimal solutions efficiently Li et al., 2013).
reducing the number of concepts in the model has two undesirable consequences.
contrasting
train_14760
fluency, should be conducted using manual evaluation.
conducting formal human evaluation is somewhat problematic.
contrasting
train_14761
Initially, manual evaluation was carried out, where human judges were tasked to assess the quality of automatically generated summaries.
in an effort to make evaluation more scaleable, the automatic ROUGE 1 measure (Lin, 2004b) was introduced in DUC-2004.
contrasting
train_14762
In this variant, word embeddings are used, as we are proposing in this paper, to map text content within generated summaries to SCUs.
the SCUs still need to be manually identified, limiting this variant's scalability and applicability.
contrasting
train_14763
The MDL principle is widely useful in compression techniques of non-textual data, such as summarization of query results for OLAP applications.
(Lakshmanan et al., 2002;Bu et al., 2005) only a few works on text summarization using MDL can be found in the literature.
contrasting
train_14764
It is noteworthy that, in addition to the greedy approach, we also evaluated the global optimization with maximizing coverage and minimizing redundancy using Linear Programming (LP).
experimental results did not provide any improvement over the greedy approach.
contrasting
train_14765
Our work is based on the bipartite entity graph introduced by Guinaudeau and Strube (2013).
in their graph one set of nodes corresponds to entities whereas in our graph it corresponds to topics.
contrasting
train_14766
We use the DUC 2002 dataset to compare our results with state-of-the-art techniques.
to the PLOS Medicine data the DUC 2002 dataset contains very small articles.
contrasting
train_14767
However the difference between the results of Tgraph and Egraph are not significant.
to the entity graph based system, the coherence measure in our system is calculated by using a topic-based weighted projection graph, which is denser and hence more informative.
contrasting
train_14768
The results are comparable with the entity graph.
the entity graph is less informative and very sparse as compared to the topical graph.
contrasting
train_14769
Instead of performing SVD, we can also take s i ∈ R B as our sentence representation, which makes our method resemble the bigram coverage-based summarization approach.
this makes s i a very sparse vector.
contrasting
train_14770
Colmenares et.al construct a 1.3 million financial news headline dataset written in English for headline generation (Colmenares et al., 2015).
the data set is not publicly available.
contrasting
train_14771
Recently, recurrent neural network (RNN) have shown powerful abilities on speech recognition (Graves et al., 2013), machine translation and automatic dialog response (Shang et al., 2015).
there is rare research on the automatic text summarization by using deep models.
contrasting
train_14772
The first and second tier cities are facing growth difficulties.
o2o market in the third and fourth tier cities contains opportunities.
contrasting
train_14773
To some extent, this hope was validated through a number of works at the time, mostly involving machine translation applications, and constraining in more or less explicit ways the specification of r (van Noord, 1990).
for the non-statistical approaches to parsing then strongly dominant, robustness was an issue: a parser had to either accept or reject a given input x, with no intermediary options, and in order to be able to parse actual utterances, with all their empirical diversity, parsers had to be rather tolerant.
contrasting
train_14774
Modern QA systems rely on an independent component to pre-select candidate answer sentences, which utilizes various signals such as lexical matching and user behaviors.
the candidate sentences Table 5: Evaluation of answer triggering on the WIKIQA dataset.
contrasting
train_14775
Most work leverages multiple sources of information, such as search query history, Twitter feeds, Facebook likes, social network links, and user profiles.
in many situations, little of this information is available.
contrasting
train_14776
We term this procedure as On-Demand Augmentation (ODA), because the search can be performed during test time in an on-demand manner.
the previous approaches of adding edges or embeddings to the KB (Gardner et al., 2013), and vector space random walk PRA (Gardner et al., 2014) are batch procedures.
contrasting
train_14777
Using surface level relations and noun phrases for extracting meaningful relational facts is not a new idea (Hearst, 1992), (Brin, 1999), (Etzioni et al., 2004).
none of them make use of Knowledge Bases for improving information extraction.
contrasting
train_14778
It is then reasonable to expect that a state-of-the-art formal semantics provides an accurate computational basis of natural language inferences.
there are still obstacles in the way of achieving this goal.
contrasting
train_14779
Thus, E2 and E4 are relatively difficult to be recognized as events by themselves.
event coreference E1-E2, which is supported primarily by E2's participants Barclays and Zaragozano shared with E1, helps determine that E2 is an event.
contrasting
train_14780
That is, one can train a structured learning model to globally capture the interactions between two relevant tasks via a certain kind of structure, while making predictions specifically for these respective tasks.
no prior work has studied the interactions between event trigger identification and event coreference resolution.
contrasting
train_14781
If one uses this approach, a beam state may represent a partial assignment of an event trigger.
event coreference can be explored only from complete assignments of an event trigger.
contrasting
train_14782
It is possible for the model to predict an invalid encoded sequence that does not correspond to any word in the original vocabulary.
in our experiments, we did not observe any such sequences in the decoding of the test set.
contrasting
train_14783
the whole neural network (not just the output layer like the NNJM) for each noise sample and thus noise computation is more expensive.
for different epochs, we resampled the negative example for each positive example, so the BNNJM can make use of different negative examples.
contrasting
train_14784
For the number of top configurations k used to initialize each following stage, we know the larger k is, the better results in the next stage since Bayesian Optimization relies on good initial knowledge to fit good regression models (Feurer et al., 2015).
larger k value also leads to high computation cost at the next stage, since these initial settings will have to be evaluated first.
contrasting
train_14785
Notice that the weakly supervised classifier trained with 68M words obtains 68% accuracy on the AOC test set and 48% on the FB test set (row 1), which is not much higher.
considering this classifier is trained without any human labeled dialect data, the performance is expected and can be improved with better training data and models.
contrasting
train_14786
Precisions increase from the weakly supervised to the strongly supervised to the semi-supervised classifier, and the combined classifier generally outperforms all three classifiers, except for the Gulf dialect.
considering the smaller percentage of the Gulf dialect, we still observe significant improvement overall.
contrasting
train_14787
Recognizing claims exhibits similar behavior.
recognizing non-argumentative text performs better in the opposite direction.
contrasting
train_14788
These papers draw similar conclusions, showing that the the distribution of geotagged tweets over the US population is not random, and that higher usage is correlated with urban areas, high income, more ethnic minorities, and more young people.
this prior work did not consider the biases introduced by relying on geotagged messages, nor the consequences for geo-linguistic analysis.
contrasting
train_14789
As described in Section 3.2, the distant supervision labels were based on a linear combination of three heuristics that achieved at best an RMSE of 5.1 sentences.
with self-training, we can exploit the noisy heuristic labels by using only those labels that agree with the seed-trained model, thus reducing the amount of noise.
contrasting
train_14790
In this work, we also consider sentences as units of suggestion.
we observe that sentences might miss the context, or refer to something mentioned in the previous sentence.
contrasting
train_14791
And reposting messages, namely reposts, can provide valuable context information to the previous posts including their background, development, public opinions and so on.
a popular post usually attracts a large number of reposts.
contrasting
train_14792
A simple way to detect leaders on repost tree is to directly apply a binary classifier like SVM on each individual message.
these models assume reposts are independent without effectively leveraging abundant context along the repost tree paths, such as the reposting relations among different reposts on a path.
contrasting
train_14793
Therefore, this makes it possible to include true leaders misclassified as followers by leader detection module into summary.
allowing all messages to participate in ranking also increases the risk of selecting real followers.
contrasting
train_14794
Traditional approaches to zero anaphora resolution are based on manually created heuristic rules (Kameyama, 1986;Walker et al., 1994;Okumura and Tamura, 1996;Nakaiwa and Shirai, 1996), which are mainly motivated by the rules and preferences introduced in Centering Theory (Grosz et al., 1995).
the research trend of zero anaphora resolution has shifted from such rule-based approaches to machine learningbased approaches because in machine learning we can easily integrate many different types of information, such as morpho-syntactic, semantic and discourse-related information.
contrasting
train_14795
In these methods, the semantic compatibility between the contexts surrounding an anaphor and its antecedent (e.g., the compatibility of verbs kidnap and release given some arguments) was automatically extracted from raw texts in an unsupervised manner and used as features in a machine learning-based approach.
because the automatically acquired semantic compatibility is not always true or applicable in the context of any pair of an anaphor and its antecedent, the effectiveness of the compatibility features might be weakened.
contrasting
train_14796
Another important point is that in the NAIST Text Corpus, if the antecedent of a zero anaphor is not explicitly written in the corpus, it is simply annotated as 'exophoric', and the subject sharing relations between two predicates whose subject was annotated as exophoric cannot be captured.
in our cleaning procedure, the annotators additionally annotated such 'exophoric' subject sharing relations to take into account all subject sharing relations in the corpus.
contrasting
train_14797
When combining more than one subject sharing recognizer in our method, we construct the SSPN using the subject sharing relations recognized by at least one of those recognizers for transitive subject propagation.
in the baseline method, the SSPN was not constructed and zero anaphoric relations were identified using only the outputs of our subject detector and one of those recognizers.
contrasting
train_14798
The results are shown in Table 5 and demonstrate that the performance of all the methods without Step 5 does not reach that of the baseline method in F-score.
they retain high precision that ranges from 60% to 75%, preserving more than 10% of the recall on the DEP, DEP+ADJ, DEP+PNP and DEP+ADJ+PNP methods.
contrasting
train_14799
Given labeled data, supervised learning can be applied to obtain sentiment weights for each word.
the effectiveness of supervised sentiment analysis depends on having training data in the same domain as the target, and this is not always possible.
contrasting