Schema:
  id         string, length 7–12
  sentence1  string, length 6–1.27k
  sentence2  string, length 6–926
  label      string, 4 classes
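The records below can be read with the Hugging Face datasets library. A minimal sketch, assuming the dataset is published on the Hub; the identifier "org/contrastive-pairs" is a hypothetical placeholder for the actual dataset path:

    # Minimal sketch: load and inspect one record of this dataset.
    # "org/contrastive-pairs" is a hypothetical placeholder; substitute
    # the real dataset identifier on the Hugging Face Hub.
    from datasets import load_dataset

    ds = load_dataset("org/contrastive-pairs", split="train")

    example = ds[0]
    print(example["id"])         # e.g. "train_21000"
    print(example["sentence1"])  # first sentence of the pair
    print(example["sentence2"])  # second sentence of the pair
    print(example["label"])      # one of 4 classes, e.g. "contrasting"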
train_21000
That work was also not capable of learning from supervised annotations in a downstream task.
our approach uses document-level sentiment annotations to learn about the role of discourse connectors in sentence-level subjectivity.
contrasting
train_21001
Discourse coherence has always been used as a key metric in human scoring rubrics for various assessments of spoken language.
very little research has been done to assess a speaker's coherence in automated speech scoring systems.
contrasting
train_21002
These features were either extracted directly from the speech signal or were based on the output of an automatic speech recognition system (with a word error rate of around 28%).¹ ¹ Both the training and evaluation sets used to develop the speech recognizer consist of similar spoken responses drawn from the same assessment.
there is no response overlap between these sets and the corpus used for discourse coherence annotation in this study.
contrasting
train_21003
The mismatch in the level of abstraction between the natural language representation and the regular expression representation makes this a novel and challenging problem.
a given regular expression can be written in many semantically equivalent forms, and we exploit this flexibility to facilitate translation by finding a form which more directly corresponds to the natural language.
contrasting
train_21004
Regular expressions (regexps) have proven themselves to be an extremely powerful and versatile formalism that has made its way into everything from spreadsheets to databases.
despite their usefulness and wide availability, they are still considered a dark art that even many programmers do not fully understand (Friedl, 2006).
contrasting
train_21005
utilize unification to find possible ways to decompose the logical form.
they perform only syntactic unification.
contrasting
train_21006
The DFA representation of a regular expression may be exponentially larger than the original regular expression.
past work has shown that most regular expressions do not exhibit this exponential behavior (Tabakov and Vardi, 2005;Moreira and Reis, 2012), and the conversion process is renowned for its good performance in practice (Moreira and Reis, 2012).
contrasting
train_21007
We define the features in our model over individual parse productions, admitting the use of dynamic programming to efficiently calculate the unconditioned expected counts.
when we condition on generating the correct regular expression, as in the first term in (2), the calculation no longer factorizes, rendering exact algorithms computationally infeasible.
contrasting
train_21008
Thus, they approximate this full computation using beam search, maintaining only the m-best logical forms at each chart cell.
qualitatively, our n-best approximation always represents the most likely parses in the approximation, but the number of represented parses scales only linearly with n. The number of parses represented by the beam search algorithm of past work can potentially scale exponentially with the beam size, m, due to its use of dynamic programming.
contrasting
train_21009
One solution is to adopt a nonparametric Bayesian method by incorporating a hierarchical prior over the parameters (e.g., a Dirichlet process).
this approach can impose unrealistic restrictions on the model choice and result in intractability which requires sampling or approximate inference to overcome.
contrasting
train_21010
Let ρ_w = Σ_i p(u_i) |u_i⟩⟨u_i|. The similarity of two usage kets |u_i⟩ and |u_j⟩ is obtained, as is common in the literature, by their inner product ⟨u_i|u_j⟩.
as this is a complex value, we multiply it with its complex conjugate, rendering the real value |⟨u_i|u_j⟩|². Therefore, in total the expected similarity of w and v is sim(w, v) = tr(ρ_w ρ_v). We see that the similarity function simply reduces to multiplying ρ_w with ρ_v and applying the trace function.
contrasting
train_21011
Most recent Open IE systems have targeted verbal relations (Banko et al., 2007;Mausam et al., 2012), claiming that these are the majority.
Chan and Dan (2011) show that only 20% of relations in the ACE Relation Detection and Characterization (RDC) programs are verbal.
contrasting
train_21012
Typically an Open IE system is tested on one dataset.
because the definition of relation is ambiguous, we believe that it is necessary to test with multiple datasets.
contrasting
train_21013
A matching component that uses the phrases "New York," "day care," and "license" is likely to do better.
a better matching component will understand that in the context of this query all three phrases "New York," "day care" and "license" are important, and that "New York" needs to modify "day care."
contrasting
train_21014
Since these papers considered a small number of answer types, rules over the detected relations and answer types could be applied to find the relevant answer.
since our system answers non-factoid questions that can have answers of arbitrary types, we want to use as few rules as possible.
contrasting
train_21015
These studies focused on the exterior characteristics of spoken genres, since they assumed that entire scripts are given in advance and then they extracted keywords that best describe the scripts.
to the best of our knowledge, no previous study has considered the time of utterances, which is an intrinsic element of spoken genres.
contrasting
train_21016
Information decreases gradually due to the exponential nature of forgetting.
whenever the information is repeated, it can be recalled for longer.
contrasting
train_21017
Existing ontologies are not optimal for solving opaque coreferent mentions because of both a precision and a recall problem (Lee et al., 2011;Uryupina et al., 2011).
using data-driven methods such as distributional semantics for coreference resolution suffers especially from a precision problem (Ng, 2007).
contrasting
train_21018
There are many similarities between paraphrase and coreference, and our work is most similar to that by Wang and Callison-Burch (2011).
some paraphrases that might not be considered to be valid (e.g., under $200 and around $200) can be acceptable coreference relations.
contrasting
train_21019
We then develop novel semantic, syntactic and salience features for this task, showing strong improvements over one of the best known prior models (Poesio et al., 2004).
this local model classifies each anaphor-antecedent candidate pair in isolation.
contrasting
train_21020
There is a partial overlap between bridging and implicit noun roles (Ruppenhofer et al., 2010).
work on implicit noun roles is mostly focused on a few predicates (e.g.
contrasting
train_21021
Bridging is a relatively local phenomenon, with 71% of NP antecedents occurring in the same sentence as, or up to 2 sentences prior to, the anaphor.
farther away antecedents are common when the antecedent is the global focus of a document.
contrasting
train_21022
The salience feature in the pairwise model only measures the salience for candidates within the local window.
globally salient antecedents are preferred even if they are far away from the anaphor.
contrasting
train_21023
Recall that existing approaches have relied primarily on morphosyntactic features as well as a few semantic features extracted from WordNet synsets and VerbOcean's (Chklovski and Pantel, 2004) semantic relations.
we propose not only novel lexical and grammatical features, but also sophisticated features involving semantics and discourse.
contrasting
train_21024
Hence, unlike syntactic dependencies and predicateargument relations through which we can identify intra-sentential temporal relations, discourse relations can potentially be exploited to discover both inter-sentential and intra-sentential temporal relations.
no recent work has attempted to use discourse relations for temporal relation classification. (6) {Arg1 Hewlett-Packard Co. said it raised its stake in Octel Communications Corp. to 8.5% of the common shares outstanding.
contrasting
train_21025
The rightmost three columns correspond to the three ways of combining rules and machine learning described in Section 4.3.
the rows of the table differ in terms of what features are available to a system.
contrasting
train_21026
At most, one might think that reordering is lexicalized-perhaps, (for instance) in translating from Chinese to English, or from Arabic to English, there are certain words whose English translations tend to undergo long-distance movement from their original positions, while others stay close to their original positions.
one would not expect a particular Chinese adverb or a particular Arabic noun to undergo long-distance movement when being translated into English in one domain, but not in others.
contrasting
train_21027
Many SMT systems, including our own, still use this distance-based penalty as a feature.
starting with (Tillmann and Zhang, 2005;Koehn et al., 2005), a more sophisticated type of reordering model has often been adopted as well, and has yielded consistent performance gains.
contrasting
train_21028
The resulting mixture model is deficient (non-normalized), but easy to fix by backing off to a global distribution such as p(o) in equation 1.
we found that this "fix" caused large drops in performance, for instance from the Arabic BLEU score of 48.3 reported in table 3 to 46.0.
contrasting
train_21029
Several alternatives now exist: MIRA (Watanabe et al., 2007;Chiang et al., 2008), PRO (Hopkins and May, 2011), linear regression (Bazrafshan et al., 2012) and ORO (Watanabe, 2012) among others.
these approaches optimize towards the best score as reported by a single evaluation metric.
contrasting
train_21030
In (Duh et al., 2012) the choice of which option to use rests with the MT system developer and in that sense their approach is an a posteriori method to specify the preference (Marler and Arora, 2004).
to this, our tuning framework provides a principled way of using the Pareto-optimal options via ensemble decoding.
contrasting
train_21031
Research in SMT parameter tuning has seen a surge of interest recently, including online/batch learning (Watanabe, 2012;Cherry and Foster, 2012), large-scale training (Simianer et al., 2012;He and Deng, 2012), and new discriminative objectives (Gimpel and Smith, 2012;Zheng et al., 2012;Bazrafshan et al., 2012).
few works have investigated the multi-metric tuning problem in depth.
contrasting
train_21032
This is advantageous because the optimal weighting may not be known in advance.
the notion of Pareto optimality implies that multiple "best" solutions may exist, so the MT system developer may be forced to make a choice after tuning.
contrasting
train_21033
The original PMO-PRO seeks to maximize the points on the Pareto frontier (blue curve in the figure), leading to Pareto-optimal solutions.
the PMO ensemble combines the different Pareto-optimal solutions, potentially moving in the direction of the dashed (green) arrows to some point that has a higher score in either or both dimensions.
contrasting
train_21034
This has been the general motivation for considering the linear combination of metrics (Cer et al., 2010;Servan and Schwenk, 2011) resulting in a joint metric, which is then optimized.
due to the scaling differences between the scores of different metrics, the linear combination might completely suppress the metric having scores in the lower range.
contrasting
train_21035
All the proposed methods fit naturally within the usual SMT tuning framework.
some changes are required in the decoder to support ensemble decoding and in the tuning scripts for optimizing with multiple metrics.
contrasting
train_21036
Figures 3(b) and 4(a) show that none of the proposed methods managed to improve the baseline scores for METEOR and TER.
several of our ensemble tuning combinations work well for both METEOR (BR, BMRTB3, etc.)
contrasting
train_21037
In the popular cube pruning algorithm, every hypothesis is annotated with boundary words and permitted to recombine only if all boundary words are equal.
many hypotheses share some, but not all, boundary words.
contrasting
train_21038
Many features are additive: they can be expressed as weights on edges that sum to form hypothesis features.
log probability from an N-gram language model is non-additive because it examines surface strings across edge and vertex boundaries.
contrasting
train_21039
Their work bears some similarity to our algorithm in that partially overlapping state will be collapsed and efficiently handled together.
the key advantage of our approach is that groups have a score that can be used for pruning before the group is expanded, enabling pruning without first constructing the intersected automaton.
contrasting
train_21040
There are several forms of coarse-to-fine search; the closest to our work increases the language model order each iteration.
by operating inside search, our algorithm is able to handle hypotheses at different levels of refinement and use scores to choose where to further refine hypotheses.
contrasting
train_21041
The clusters that have no member noun were hidden from the ranking since they do not explicitly represent any concept.
these clusters are still part of the organisation of conceptual space within the model and they contribute to the probability for the clusters on upper levels (eq.
contrasting
train_21042
Another important reason AGG fails is that it by definition organises all concepts into a tree and optimises its solution locally, taking into account a small number of clusters at a time.
being able to discover connections between more distant domains and optimising globally over all concepts is crucial for metaphor identification.
contrasting
train_21043
We manually compiled sets of typical features for the 10 source domains, and measured their recall among the top 50 HGFC features at R = 0.70.
in practice the coverage in this task would directly depend on that of the metaphorical associations.
contrasting
train_21044
Across all documents, a perfect agreement was never achieved, which confirms the difficulty of annotating such a subjective task: The average lccm per document on the manual annotations is 0.56 (dense chains), respectively 0.54 (merged chains).
the considerable overlap between the annotators still enables us to evaluate automatic chaining methods, and the lccm agreement score serves as an upper bound.
contrasting
train_21045
The former outperforms the previous best system UTD N B in both Spearman's ρ and MaxDiff accuracy, where the differences are statistically significant⁸; the latter has comparable performance, where the differences are not statistically significant.
while the IsA relation from Probase is exceptionally good in identifying Class-Inclusion relations, with high Spearman's ρ = 0.619 and MaxDiff accuracy.⁸ ⁸ We conducted a paired t-test on the results of each of the 69 relations.
contrasting
train_21046
Notice that the main purpose of the ablation study is to verify the importance of an individual component model when a significant performance drop is observed after removing it.
occasionally the overall performance may go up slightly.
contrasting
train_21047
Lexical features are important as they can capture semantic meaning precisely.
given that we do not have many labeled examples, lexical features can lead to overfitting.
contrasting
train_21048
The same authors proposed an unsupervised method which relies on associations of aspects with topics indicative of stances mined from the Web for the task (Somasundaran and Wiebe, 2009).
our model is also an unsupervised one but we do not rely on any external knowledge.
contrasting
train_21049
Recently, Mukherjee and Liu (2012) proposed an unsupervised model to extract different types of expressions including agreement/disagreement expressions.
our focus is not to detect agreement/disagreement expressions but to model the interplay between agreement/disagreement expressions and viewpoints.
contrasting
train_21050
One possible solution is to use supervised learning, which requires training data (Galley et al., 2004;Abbott et al., 2011;Andreas et al., 2012).
training data are also likely domain- and language-dependent, which makes them hard to re-use.
contrasting
train_21051
For example, in the above sentence, the author clearly expressed that he/she wanted to buy a car.
an example of an implicit sentence is "Anyone knows the battery life of iPhone?"
contrasting
train_21052
Some researchers also used topic modeling of both domains to transfer knowledge (Gao & Li, 2011;He, Lin & Alani, 2011).
none of these methods deals with the two problems/difficulties of our task.
contrasting
train_21053
Assuming that word probabilities are independent given a class, we have the NB classifier P(c_j | d_i) ∝ P(c_j) ∏_{w ∈ d_i} P(w | c_j)^{N(w, d_i)}. The EM algorithm basically builds a classifier iteratively using NB and both the labeled source data and the unlabeled target data.
the major shortcoming is that the feature set, even with feature selection, may fit the labeled source data well but not the target data because the target data has no labels to be used in feature selection.
contrasting
train_21054
EM can select features only before iterations using the labeled source data and keep using the same features in each iteration.
these features only fit the labeled source data but not the target data.
contrasting
train_21055
The gain of iteration 1 shows that incorporating the target domain data (unlabeled) is helpful.
the selected features from source domains can only fit the labeled source data but not the target data, which was explained in Section 3.1.
contrasting
train_21056
Selecting features only from the target domain makes sense since it can reflect target domain data well.
it also becomes worse with the increased number of iterations, due to strong positive features.
contrasting
train_21057
showed that by selectively sharing parameters based on typological features of each language, substantial improvements can be achieved, compared to using a single set of parameters for all languages.
these methods all employ generative models with strong independence assumptions and weak feature representations, which upper bounds their accuracy far below that of feature-rich discriminative parsers (McDonald et al., 2005;Nivre, 2008).
contrasting
train_21058
We see that Delex performs well on target languages that are related to the majority of the source languages.
for languages³ [³ Model "D-,To" in Table 2 from Naseem et al.]
contrasting
train_21059
As a natural first attempt at sharing parameters, one might consider forming the cross-product of all features of Delex with all WALS properties, similarly to a common domain adaptation technique (Daumé III, 2007; Finkel and Manning, 2009).
this approach has two issues.
contrasting
train_21060
Similarly Täckström (2012) used self-training to adapt a multi-source direct transfer named-entity recognizer to different target languages, "relexicalizing" the model with word cluster features.
as discussed in §5.2, standard self-training is not optimal for target language adaptation.
contrasting
train_21061
Methods for learning with ambiguous labelings have previously been proposed in the context of multi-class classification (Jin and Ghahramani, 2002), sequence-labeling (Dredze et al., 2009), log-linear LFG parsing (Riezler et al., 2002), as well as for discriminative reranking of generative constituency parsers (Charniak and Johnson, 2005).
to Dredze et al., who allow for weights to be assigned to partial labels, we assume that the ambiguous arcs are weighted uniformly.
contrasting
train_21062
This is not surprising given that the base parser simply observes this phrase as DET NOUN NOUN.
looking at the arc marginals we can see that the correct analysis is available during AAST, although the actual marginal probabilities are quite misleading.
contrasting
train_21063
ADTree (Freund & Mason, 1999) with ten boosting iterations performed best, with 91% recall and 91% precision under 10-fold cross-validation.
the ADTree models were overfitted to the AdventureWorks domain.
contrasting
train_21064
This is because the most specific intelligible attribute is slightly ambiguous, and C must occasionally supply extra constraints to disambiguate.
with both specificity and vocabulary selection, L achieves a mean dialogue length of 11.3, requiring only two turns more than C to order a book.
contrasting
train_21065
By considering the frequency distributions from lexical category co-occurrence, they produced a set of pseudowords which were closer to real ambiguous words in terms of disambiguation difficulty than random pseudowords.
this approach requires a specific hierarchical lexicon and falls short of creating many pseudowords with high polysemy (the authors report generating pseudowords with two senses only).
contrasting
train_21066
We evaluate our method on existing annotated datasets from various AMT tasks.
we also want to ensure that our model can handle adversarial conditions.
contrasting
train_21067
Raw agreement is thus a very simple measure.
Cohen's κ corrects the agreement between two annotators for chance agreement.
contrasting
train_21068
We also compute the κ values for each pair of annotators, and average them for each annotator (similar to the approach in Tratz and Hovy (2010)).
whenever one label is more prevalent (a common case in NLP tasks), κ overestimates the effect of chance agreement (Feinstein and Cicchetti, 1990) and penalizes disproportionately.
contrasting
train_21069
The previous sections showed that our model reliably identifies trustworthy annotators.
we also want to find the most likely correct answer.
contrasting
train_21070
Increasing the required majority also improves accuracy, although not as much, and the loss in coverage is larger and cannot be controlled.
our method allows us to achieve better accuracy at a smaller, controlled loss in coverage.
contrasting
train_21071
We showed that our model recovers the correct answer with high accuracy.
to test whether this is just a function of the annotator pool, we experiment with varying the trustworthiness of the pool.
contrasting
train_21072
The results of the various models are presented in Table 5; multiplicative represents Mitchell and Lapata's (2008) model. In the contextualized version of the similarity task (in which the landmark is combined with subject and object), all three models obtain the same result (.32).
in the non-contextualized version (in which only the target verb is combined with subject and object), the models differ in performance.
contrasting
train_21073
Twitter offers an unprecedented advantage in live reporting of events happening around the world.
summarizing Twitter events has been a challenging task that was not fully explored in the past.
contrasting
train_21074
Most previous summarization studies focus on the well-formatted news documents, as driven by the annual DUC 2 and TAC 3 evaluations.
the Twitter messages (a.k.a., tweets) are very short and noisy, containing nonstandard terms such as abbreviations, acronyms, emoticons, etc.
contrasting
train_21075
Although coherence has been used in ordering of summary sentences (Section 2.2), this work is limited by the quality of summary sentences given as input.
g-FLOW incorporates coherence in both selection and ordering of summary sentences.
contrasting
train_21076
In general, the normalizer of p w and the expectation over p w cannot be computed directly, since there may be exponentially many coherent subsets of anchored rules.
we note that A and its corresponding S(A) form a segmentation of the base form b, with features decomposing over individual segments.
contrasting
train_21077
As a concrete example of an error our model does make, Löwe (lion) is incorrectly predicted to have the first suffix, instead of the correct suffix (not shown) which adds an -n for accusative, genitive, and dative singular as well.
making this prediction correctly is essentially beyond the capacity of a model based purely on orthography.
contrasting
train_21078
In fact, our model induces at most 4-6 roles (even if |R| is much larger).
Bayes predicts more than 30 roles for the majority of frequent predicates (e.g., 43 roles for the predicate include or 35 for say).
contrasting
train_21079
Ideally, two predicates can only align when their arguments are coreferent.
in practice we may incorrectly resolve argument links, or there may be implicit arguments that do not appear as syntactic dependencies of the predicate trigger.
contrasting
train_21080
MRR is⁴ ... ⁴ For both rank accuracy and MRR, a perfect score is 100.
MRR places more emphasis on ranking items close to the top of the list, and less on differences in ranking lower in the list.
contrasting
train_21081
Prior research has demonstrated that joint prediction alleviates error propagation inherent in pipeline architectures, where mistakes cascade from one task to the next (Bohnet et al., 2013;Tratz, 2013;Hatori et al., 2012;Zhang et al., 2014a).
jointly modeling all the processing tasks inevitably increases inference complexity.
contrasting
train_21082
As shown in section 5, syntactic features greatly improve semantic parsing.
it is interesting to explore more precisely what kind of syntactic information boosts or penalizes our predictions.
contrasting
train_21083
The use of paths and of the output of a graph-based parser (Bohnet, 2010) favors the capture of complex dependencies and enhances the learning of these constructions for our local transition-based parser.
we also observe that the features are not able to completely stop the loss of F 1 -score for longer sentences.
contrasting
train_21084
Yet, compared to stateof-the-art systems, our results built on the S&T parser score lower than the top performers (Table 10).
we are currently extending a more advanced lattice-aware transition-based parser (DSR) with beams (Villemonte De La Clergerie, 2013) that takes advantage of cutting-edge techniques (dynamic programming, averaged perceptron with early updates, etc.
contrasting
train_21085
Furthermore, the gain brought by the second-order features is reduced by half when used jointly with our feature set (+1.09 vs +0.57 with them). [Table: PAS / DM — (T.PARSER+features, this paper) 92.11 / 89.70; (Du et al., 2014) 92.04 / 89.40; (Martins and Almeida, 2014) 91.76 / 89.16; (DSR, this paper) 90.13 / 85.66]
although we could assess that the need for second-order models is thus alleviated, the conjunction of both types of features still improves the parser performance by an overall gain of 1.62 points on DM (1.18 on PAS), suggesting that both feature sets contribute to different types of "structures".
contrasting
train_21086
In fact, past research mentioned in Section 2 refers to this exact task as both citation prediction and link prediction.
link prediction is a commonly used phrase which may be used to describe other problems not concerning documents and citation prediction.
contrasting
train_21087
This cursory, qualitative critique of the metrics warrants more research, ideally with human evaluation.
one can see how these metrics differ: TC and PMI are both entirely concerned with just the co-occurrence of terms, normalized by the general popularity of the said terms.
contrasting
train_21088
In this scenario, we have access to both engineered and log data to train a model.
we do not have access to web search click log data.
contrasting
train_21089
A simple remedy is to use word bi-grams in addition to unigrams (Blitzer et al., 2007;Glorot et al., 2011;Wang and Manning, 2012).
use of word n-grams with n > 1 on text categorization in general is not always effective; e.g., on topic categorization, simply adding phrases or n-grams is not effective (see, e.g., references in (Tan et al., 2002)).
contrasting
train_21090
[Algorithm 3: GETPOSSIBLEACTIONS for full tree linearization, where C is a full tree. Input: a state s = ([σ|j i], ρ, A) and gold tree C. Output: a set of possible transition actions.] Algorithm 2 can also be used with full-tree constraints, which are a special case of partial-tree constraints.
there is a conceptually simpler algorithm that leverages full-tree constraints.
contrasting
train_21091
Table 6 in (Qazvinian et al., 2013).
those results are not comparable with ours for the following reasons.
contrasting
train_21092
The results in this table are satisfying: with respect to LSI document similarity, our model outperforms all of the baselines and its value is close to the one achieved by human-generated headlines.
the human evaluations are middling: the word-graph method produces more readable headlines, but our model proves to be more informative because it does a better job of detecting abstract word relationships in the text.
contrasting
train_21093
We cannot use keyword spotting if the goal is to align instructional text to videos.
if our goal is just to create a labeled corpus of video clips, keyword spotting is a reasonable approach.
contrasting
train_21094
DSMs have been very effectively applied to a variety of semantic tasks (Clark, 2015;Mikolov et al., 2013b;Turney and Pantel, 2010).
compared to human semantic knowledge, these purely textual models, just like traditional symbolic AI systems (Harnad, 1990; Searle, 1984), are severely impoverished, suffering from a lack of grounding in extra-linguistic modalities (Glenberg and Robertson, 2000).
contrasting
train_21095
Typically, each wetlab experiment has a protocol written in natural language, describing the sequence of steps necessary for that experiment.
these instructions are often incomplete, and do not spell out implicit assumptions and knowledge, causing the results to be difficult to reproduce (Begley and Ellis, 2012).
contrasting
train_21096
The feature weights learned by LSSVM and its variants were smaller than those for LSP (due to regularization).
they always resulted in the same forced decoding alignments in our experiments, and obtained the same alignment accuracy.
contrasting
train_21097
Content analysis, a widely-applied social science research method, is increasingly being supplemented by topic modeling.
while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on scalability and less on coding reliability, leading to growing skepticism on the usefulness of topic models for automated content analysis.
contrasting
train_21098
Until recently, the goal of training open-domain conversational systems that emulate human conversation has seemed elusive.
the vast quantities of conversational exchanges now available on social media websites such as Twitter and Reddit raise the prospect of building data-driven models that can begin to communicate conversationally.
contrasting
train_21099
good job in explaining away 1-gram and 2-gram message matches (Figure 4 (a)) and the performance gain mainly comes from context matches.
we observe that 4-gram matches may be important in selecting appropriate responses.
contrasting