id: string (lengths 7–12)
sentence1: string (lengths 6–1.27k)
sentence2: string (lengths 6–926)
label: string (4 classes)
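A minimal sketch of reading records with this schema, assuming the split is stored as JSON lines with one record per line (the file name "train.jsonl" is a placeholder, not part of the dataset):

    import json

    # Each record carries the four fields listed above:
    # id, sentence1, sentence2, and one of the 4 label classes.
    with open("train.jsonl", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            print(record["id"], record["label"])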
train_10800
The rest are due to other causes such as OLLIE's ability to handle relationships mediated by nouns and adjectives, or REVERB's shallow syntactic analysis, etc.
OLLIE misses very few extractions returned by REVERB, mostly due to parser errors.
contrasting
train_10801
It also indicates if it contributes a new table, or a count to t_{i,j,k} for the PDP at this node.
as we discussed above, this then contributes to either t_{i,j,k} or s_{i,j,k}.
contrasting
train_10802
Figure 8 shows similar topic evolution plots for LDA, STM and AdaTM.
the AdaTM topic evolutions are much clearer for the less frequent topics, as shown in Figure 8(c).
contrasting
train_10803
To some extent, our model is similar to Semi-Markov Conditional Random Fields (called a Semi-CRF), in which the segmentation and labeling can also be done directly (Sarawagi and Cohen, 2004).
Semi-CRF just models label dependency, and it cannot capture more correlations between adjacent chunks, as is done in our approach.
contrasting
train_10804
It is linear in the length of sentence.
the constant in the O is relatively large.
contrasting
train_10805
Otherwise, we use a simulated annealing strategy where hypotheses with a lower score can still be accepted with a certain probability which depends on the difference between the hypothesis score and the current best score. [Figure 1: Example of a search tree produced by the beam-search decoder for the input "In other hands, they might be right."]
the highest scoring hypothesis found is "they might be right."
contrasting
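A minimal sketch of the acceptance rule described in this pair, assuming a Metropolis-style probability that is exponential in the score gap (the excerpt gives neither the exact formula nor the cooling schedule):

    import math
    import random

    def accept(hyp_score, best_score, temperature):
        # Higher-scoring hypotheses are always kept; lower-scoring ones
        # survive with a probability that decays with the score gap.
        if hyp_score >= best_score:
            return True
        return random.random() < math.exp((hyp_score - best_score) / temperature)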
train_10806
This study presents a novel method that measures English language learners' syntactic competence towards improving automated speech scoring systems.
to most previous studies which focus on the length of production units such as the mean length of clauses, we focused on capturing the differences in the distribution of morpho-syntactic features or grammatical expressions across proficiency.
contrasting
train_10807
An assessment of ESL learners' syntactic competence should consider the structure of sentences as a whole -a task which may not be captured by the simplistic POS tag distribution.
studies of Lu (2010) and Chen and Zechner (2011) showed that more complex syntactic features are unreliable in an ASR-based scoring system.
contrasting
train_10808
Excitation values are determined by spreading activation applied to the network, given a small number of manually prepared seed templates.
we cannot construct the network unless we know whether each noun pair is PNP or NNP, due to the configuration of the constraint matrix, and currently we have no feasible method to classify all of them into PNPs and NNPs in advance.
contrasting
train_10809
For example, the pair emit smells ⊥ reduce smells is not logically contradictory since the two events can happen at the same time.
they become almost contradictory when their tendencies get stronger (i.e., emit smells more strongly ⊥ thoroughly reduce smells).
contrasting
train_10810
(1) a. look like ⇔ resemble; b. control system ⇔ controller. The challenge in acquiring paraphrases is to ensure good coverage of the targeted classes of paraphrases along with a low proportion of incorrect pairs.
no matter what type of resource has been used, it has proven difficult to acquire paraphrase pairs with both high recall and high precision.
contrasting
train_10811
In terms of coverage, P_Hvst is expected to be much larger than P_Seed, although it will not cover totally different pairs of paraphrases, such as those shown in (1).
the quality of P_Hvst depends on that of P_Seed.
contrasting
train_10812
Thus, as long as at least one of them has a probability higher than the given threshold value, corresponding novel paraphrases can be harvested.
as a result of assessing each individual paraphrase pair by the contextual similarity, many pairs in P_Hvst, which are supposed to be incorrect instances of their corresponding pattern, are filtered out by a larger threshold value for th_s.
contrasting
train_10813
On the other hand, as a result of assessing each individual paraphrase pair by the contextual similarity, many pairs in P_Hvst, which are supposed to be incorrect instances of their corresponding pattern, are filtered out by a larger threshold value for th_s.
many pairs in P_Seed have a relatively high similarity, e.g., 40% of all pairs have similarity higher than 0.4.
contrasting
train_10814
To obtain a better κ value, the criteria for grading will need to be improved.
we think that was not too low either.
contrasting
train_10815
Several computational models have also investigated this interaction by adding manually annotated part-of-speech tags as input to word learning algorithms, and suggesting that integration of lexical categories can boost the performance of a cross-situational model (Yu 2006, Alishahi and.
none of the existing experimental or computational studies have examined the acquisition of word meanings and lexical categories in parallel.
contrasting
train_10816
2008, Chrupała and Alishahi 2010.
explicit accounts of how such categories can be integrated in a crosssituational model of word learning have been rare.
contrasting
train_10817
The model of Niyogi (2002) simulates the mutual bootstrapping effects of syntactic and semantic knowledge in verb learning, that is the use of syntax to aid in inducing the semantics of a verb, and the use of semantics to narrow down possible syntactic frames in which a verb can participate.
this model relies on manually assigned priors for associations between syntactic and semantic features, and is tested on a toy language with very limited vocabulary and a constrained syntax.
contrasting
train_10818
verbs and nouns) can indeed help word learning in a more naturalistic incremental setting.
the model of Alishahi and Fazly (2010) integrates manually annotated part-of-speech tags into an incremental word learning algorithm, and shows that these tags boost the overall word learning performance, especially for infrequent words.
contrasting
train_10819
The weight of edges between different nodes is also measured by document similarity.
there are no edges between nodes and their initial sentiments because RANK is an iterative algorithm and each iteration gives new scores to unlabeled nodes while labeled nodes remain constant.
contrasting
train_10820
The relation between them is no relation, since it is unclear which occurs first.
e_5 and e_3 both happen in the interval I_2 but they form an overlap relation.
contrasting
train_10821
In these challenges, several temporal-related tasks were defined including the tasks of identifying the temporal relation between an event mention and a temporal expression in the same sentence, and recognizing temporal relations of pairs of event mentions in adjacent sentences.
with several restrictions imposed on these tasks, the developed systems were not practical.
contrasting
train_10822
Some of these predictors, including the identity -or even number (McClosky, 2008) -of already-generated siblings, can be prohibitively expensive in sentences above a short length k. For example, they break certain modularity constraints imposed by the charts used in O(k^3)-optimized algorithms (Paskin, 2001a;Eisner, 2000).
in bottom-up parsing and training from text, everything about the yield -i.e., the ordered sequence of all already-generated descendants, on the side of the head that is in the process of spawning off an additional child -is not only known but also readily accessible.
contrasting
train_10823
(In fact, 45% of all head-percolated dependencies in WSJ are between adjacent words.)
some common constructions are more remote: e.g., subordinating conjunctions are, on average, 4.8 tokens away from their dependent modal verbs.
contrasting
train_10824
As we are using the same data set as per the previous approach, we perform 5-fold cross validation as well.
the training for each fold is conducted with a different grammar consisting of only the vocabulary that occurs in each training fold.
contrasting
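A sketch of per-fold vocabularies under 5-fold cross-validation as described above; whitespace tokenization and scikit-learn's KFold are assumptions, not details from the excerpt:

    from sklearn.model_selection import KFold

    def fold_vocabularies(sentences, n_splits=5, seed=0):
        # For each fold, restrict the grammar's vocabulary to words
        # that occur in that fold's training sentences.
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
        for train_idx, test_idx in kf.split(sentences):
            vocab = {w for i in train_idx for w in sentences[i].split()}
            yield train_idx, test_idx, vocab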
train_10825
The work of Ahmed and Xing (2010) generalizes DTMs to iDTMs (infinite DTMs) by allowing topics to span only a subset of time slices, and allowing an arbitrary number of topics.
iDTMs still require placing documents into discrete epochs, and the issue of generating topic rather than document threads remains.
contrasting
train_10826
Metro maps are effectively sets of non-chronological threads that are encouraged to intersect and thus create a "map" of events and topics.
these approaches assume some prior knowledge about content.
contrasting
train_10827
For the datasets we use later, the actual number is around 2^1000.
we will show how to construct the desired model in a way that allows efficient inference, even for large datasets, using determinantal point processes (DPPs).
contrasting
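For reference, the determinantal point process identity that makes such inference tractable despite the roughly 2^1000 possible subsets; the kernel L below is an arbitrary stand-in, not the paper's model:

    import numpy as np

    def dpp_prob(L, subset):
        # P(Y = subset) = det(L_subset) / det(L + I) for a DPP with kernel L.
        idx = np.ix_(subset, subset)
        return np.linalg.det(L[idx]) / np.linalg.det(L + np.eye(len(L)))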
train_10828
It can equally be shown that the same holds for the constraints OneGP and NoGP.
when working with LP relaxations, the two polytopes have different fractional vertices.
contrasting
train_10829
Our work is most similar in spirit to the relaxation method presented by Riedel and Smith (2010) that incrementally adds second order edges to a graphical model based on a gain measure - the analog of our reduced cost.
they always score every higher order edge, and also provide no certificates of optimality.
contrasting
train_10830
Previously, domain adaptation learning has been successfully used in other NLP tasks such as relation extraction (Jiang, 2009) and POS tagging (Jiang and Zhai, 2007), semantic detection (Tan et al., 2008), named entity recognition (Guo et al., 2009) and entity type classification (Jiang and Zhai, 2007).
to the best of our knowledge, it has yet to be explored for coreference resolution.
contrasting
train_10831
Some of them have been successfully used in coreference resolution (Pang and Fan, 2009;Munson et al., 2005;Rahman and Ng, 2011a).
these methods only focus on the withindomain setting.
contrasting
train_10832
With this feature set, we found that the linear kernel is insufficient to fit the training data.
using an RBF kernel would be too computationally expensive.
contrasting
train_10833
This is common among all ensemble methods.
the costs in (4) and (9) are trivial as both are at the document level.
contrasting
train_10834
It is also possible to self-train a semantic parser without any labeled data (Goldwasser et al., 2011).
this approach does not perform as well as more supervised approaches, since the parser's self-training predictions are not constrained by the correct logical form.
contrasting
train_10835
If a source language L_v is a resource-rich language, then the language model P(v_1^I) can be well estimated from sufficient training texts.
if the source language L_v is a resource-poor language, then the language model P(v_1^I) cannot be reliably or robustly estimated due to lack of training texts.
contrasting
train_10836
Both ITG constraints and other constraints assume that all permutations are equally probable.
it makes sense to restrict those non-monotonic reorderings when performing the translation.
contrasting
train_10837
employ queries of the form "X of Y", where X and Y would be replaced with the wheel and the car, respectively.
we are not targeting a particular type of relation.
contrasting
train_10838
Specifically, if X is a verb, then it resolves the target pronoun to the candidate antecedent that has the same grammatical role as the pronoun.
if X is an adjective and the sentence does not involve comparison, then it resolves the target pronoun to the candidate antecedent serving as the subject of V.
contrasting
train_10839
Hence, CBR(i_1) = 1 and CBR(i_2) = 0.
had the triple occurred less than 100 times, both of these features would have been set to zero.
contrasting
train_10840
Hence, if the Stanford resolver decides to resolve the target pronoun, it will resolve it to one of the two candidate antecedents.
if it does not have enough confidence about resolving it, it will leave it unresolved.
contrasting
train_10841
Given that the Random baseline correctly resolves 50% of pronouns and the Stanford resolver correctly resolves only 40.1% of the pronouns, it is tempting to conclude that Stanford does not perform as well as Random.
recall that Stanford leaves 30.1% of the pronouns unresolved.
contrasting
train_10842
Narrative chains, on the other hand, are useful for capturing the relationship between the events described in the two clauses.
they are computed over verbs, and therefore cannot capture such a relationship when one or both of the events involved are not described by verbs.
contrasting
train_10843
For example, in Figure 3 (b), although label "root" is a parent of label "Computers & Internet", the topical words of label "Computers & Internet" show that the topical node is not a child of label "root".
in Figure 3 (a), labels "root" and "Computers & Internet" have a corresponding parent-child relation between their topical words.
contrasting
train_10844
We can also obtain similar experimental results over Y Ans and O Hlth.
due to space limitations, their detailed descriptions are omitted in this paper.
contrasting
train_10845
This is not surprising given the independent predictions of the model and the very general, language universal assumptions we have made in the model structure and feature sets.
in terms of gauging the usefulness of the hidden syntactic marginalization method the results are extremely compelling, with only marginal differences from the performance of the observed-syntax model, especially relative to the baseline.
contrasting
train_10846
The highest accuracy achieved to date under these assumptions is 91.6% (Ravi et al., 2010).
as is often noted (including by the authors themselves), many papers that work on learning taggers from tag dictionaries make unrealistic assumptions about the tag dictionaries they use as input (Toutanova and Johnson, 2008;Ravi and Knight, 2009;Hasan and Ng, 2009).
contrasting
train_10847
In the Greedy Set Cover phase this means choosing the tag bigram that would cover the most new tokens, and in the Greedy Path Completion phase this means choosing the tag bigram that would fill the most holes.
it is frequently the case that there are many distinct tag bigrams that would cover the most new tokens or fill the most holes, leaving the MIN-GREEDY algorithm with no choice but to randomly select from these options.
contrasting
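A minimal sketch of the greedy choice with random tie-breaking that this pair describes; representing candidates as a map from tag bigram to the set of tokens it would cover is an assumption:

    import random

    def pick_bigram(candidates, covered):
        # Greedy Set Cover step: take the tag bigram covering the most
        # not-yet-covered tokens; break ties uniformly at random.
        gains = {b: len(toks - covered) for b, toks in candidates.items()}
        best_gain = max(gains.values())
        tied = [b for b, gain in gains.items() if gain == best_gain]
        return random.choice(tied)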
train_10848
As was described above, the output of MIN-GREEDY's second stage is a minimized set of tag bigrams which is used as a constraint on the first iteration of the third stage, Iterative Model-Fitting.
in order to determine when to stop adding new bigrams during the first two phases, the MIN-GREEDY algorithm must try to find complete tag paths through each sentence in the raw corpus, stopping once a tag path has been found for each one.
contrasting
train_10849
This indicates that if the tag dictionary has a low degree of ambiguity, then MIN-GREEDY can make the situation worse.
with our smoothing techniques, we regain similar improvements as with the HMM.
contrasting
train_10850
This motivates an intensive study in automatically resolving person name ambiguity in various web applications.
resolving web person name ambiguity is not a trivial task.
contrasting
train_10851
Intuitively, each cluster will be treated as a topic.
we found that hub nodes usually correspond to general concepts which may be related to many topics, but only loosely.
contrasting
train_10852
Intuitively, each cluster will be treated as a topic.
we found that hub nodes usually correspond to general concepts, e.g., education or public, which may be related to many topics, but only loosely.
contrasting
train_10853
Intuitively, a larger threshold can prune more unimportant edges and improve the disambiguation performance.
if the threshold is too large, we may prune important edges and harm the results.
contrasting
train_10854
First, DeNero and Uszkoreit (2011) learn a reordering model through a three-step process of bilingual grammar induction, training a monolingual parser to reproduce the induced trees, and training a reordering model that selects a reordering based on this parse structure.
our method trains the model in a single step, treating the parse structure as a latent variable in a discriminative reordering model.
contrasting
train_10855
The above features φ and their corresponding weights w are all that are needed to calculate scores of derivation trees at test time.
during training, it is also necessary to find model parses according to the loss-augmented scoring function S(D|F, w) + L(D|F, A) or oracle parses according to the loss L(D|F, A).
contrasting
train_10856
lader is able to improve over the orig baseline in all cases, but when equal numbers of manual and automatic alignments are used, the reorderer trained on manual alignments is significantly better.
as the number of automatic alignments is increased, accuracy improves, approaching that of the system trained on a smaller number of manual alignments.
contrasting
train_10857
The former matches most linguistic theories while the latter does not, but to a monolingual parser, these conventions are equally learnable.
once bilingual data is involved, such treebank conventions entail constraints on rule extraction that may not be borne out by semantic alignments.
contrasting
train_10858
The lowest VP in this tree is headed by 'select,' which aligns to the Chinese verb '挑选.'
'挑选' also aligns to the other half of the English infinitive, 'to,' which, following common English linguistic theory, is outside the VP.
contrasting
train_10859
Each classification decision is made independently, allowing for inconsistency at multiple levels (within a fluent, across fluents, or across entities).
using joint inference, the classifier component can determine the best overall span for each fluent.
contrasting
train_10860
To learn the model parameters, we start by using maximum-likelihood estimation for these multinomials from training entities.
some smoothing is required since new entities may contain previously unseen answers to existing questions.
contrasting
train_10861
Notice that there is usually only one headquarters at any point in time, although the location of a headquarters can change.
the relation arg1 is funded by arg2 is a non-unique relation since it is likely that there exists more than one funder.
contrasting
train_10862
Previous studies on discourse analysis have been quite successful in identifying what machine learning approaches and what features are more useful for automatic discourse segmentation and parsing (Soricut and Marcu, 2003;Subba and Eugenio, 2009;du-Verle and Prendinger, 2009).
all the proposed solutions suffer from at least one of the following two key limitations: first, they make strong independence assumptions on the structure and the labels of the resulting DT, and typically model the construction of the DT and the labeling of the relations separately; second, they apply a greedy, suboptimal algorithm to build the structure of the DT.
contrasting
train_10863
They use a very large set of features in their parser.
taking a radically greedy approach, they model structure and relations separately, and ignore the sequence dependencies in their models.
contrasting
train_10864
The cell [i, j] in the DPT represents the span containing EDUs i through j and stores the probability of a constituent R[i, m, j], where m = argmax_{i≤k≤j} P(R[i, k, j]).
to HILDA, which implements a greedy algorithm, our approach finds a DT that is globally optimal.
contrasting
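A sketch of the cell computation described above, with P_R(i, k, j) standing in for the constituent probability P(R[i, k, j]), which the excerpt does not define:

    def fill_cell(i, j, P_R):
        # Cell [i, j] stores the best split point m and the probability
        # of the constituent R[i, m, j], with m = argmax over i <= k <= j.
        m, best = i, float("-inf")
        for k in range(i, j + 1):
            p = P_R(i, k, j)
            if p > best:
                m, best = k, p
        return m, best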
train_10865
Read as a whole, it is clear that the two texts describe the same three events, in the same order, and thus, e.g., 1.2 and 2.2 are paraphrases.
they share very few n-grams and almost no named entities.
contrasting
train_10866
Intuitively we expected the MSA-based systems to end up with a higher recall than the clustering baselines, because sentences can be matched even if their similarity is moderate or low, but their discourse context is highly similar.
this is only the case for the system using BLEU scores, but not for the system based on the vector space model.
contrasting
train_10867
We mainly compute precision for this task, as the recall of paraphrase fragments is difficult to define.
we do include a measure we call productivity to indicate the algorithm's completeness.
contrasting
train_10868
It is what federal support should try to achieve (Hajič et al., 2009).
the approach can only generate projective word orders (which can be drawn without any crossing edges).
contrasting
train_10869
In our case, it could potentially be beneficial for both the lifting classifier, and for the linearizer.
we found that marking liftings at best gave similar results as not marking, so we kept the original labels without marking.
contrasting
train_10870
Whether gold standard part-of-speech tags or distributional categories are better suited to applications like parsing or machine translation can be best decided using extrinsic evaluation.
in this study we follow previous work and evaluate our results by comparing them to gold standard part-of-speech tags.
contrasting
train_10871
Conversely, both Non-negative Matrix Factorization and Latent Dirichlet Allocation learn concise and coherent topics and achieved similar performance on our evaluations.
NMF learns more incoherent topics than LDA and SVD.
contrasting
train_10872
Phrase-based machine translation models have been shown to yield better translations than Word-based models, since phrase pairs encode the contextual information that is needed for a more accurate translation.
many phrase pairs do not encode any relevant context, which means that the translation event encoded in that phrase pair is led by smaller translation events that are independent from each other, and can be found on smaller phrase pairs, with little or no loss in translation accuracy.
contrasting
train_10873
Consequently, the lexical translation entry for Word-based models splits the probabilistic mass between different translations, leaving the choice based on context to the language model.
in Phrase-based Models, we would have a phrase pair p(in the box, dentro da caixa) and p(in china, na china), where the words "in the box" and "in China" can be translated together to "dentro da caixa" and "na China", which substantially reduces the ambiguity.
contrasting
train_10874
In this case, both the translation and language models contribute to find the best translation based on the local context, which generally leads to better translations.
not all words add the same amount of contextual information.
contrasting
train_10875
This is generally not desired, since the 2 smaller phrase pairs can be used to translate the same source sentence with a small probability loss (5%), even if the longer phrase is pruned.
if the smaller phrases are pruned, the longer phrase can not be used to translate smaller chunks, such as "the key in Portugal".
contrasting
train_10876
The example above shows an extreme case, where the event encoded in the phrase pair p(John in, John em) is decomposed into independent events, and can be removed without changing the model's prediction.
finding and pruning phrase pairs that are independent, based on smaller events is impractical, since most translation events are not strictly independent.
contrasting
train_10877
Ideally, we would want to minimize the relative entropy for all possible source and target sentences, rather than all phrases in our model.
minimizing such an objective function would be intractable due to reordering, since the probability assigned to a phrase pair in a sentence pair by each model would depend on the positioning of all other phrase pairs used in the sentence.
contrasting
train_10878
In the example above, where s="John in Portugal" and t="John em Portugal", the decoder would choose the derivation with the highest probability from s to t. Using the unpruned model, the possible derivations are either using phrase p(s, t) or one element of its support set S_1, S_2 or S_3.
on the pruned model where p(s, t) does not exist, only S_1, S_2 and S_3 can be used.
contrasting
train_10879
One possible solution to address this problem is to perform pruning iteratively, from the smallest phrase pairs (number of words) and increase the size at each iteration.
we find this undesirable, since the model will be biased toward removing smaller phrase pairs, which are generally more useful, since they can be used in multiple derivations to replace larger phrase pairs.
contrasting
train_10880
In theory, the multinomial distribution should yield better results, since the pruning model will prefer to prune phrase pairs that are more likely to be observed.
longer phrase pairs tend to compete with other long phrase pairs over which gets pruned first.
contrasting
train_10881
Given the shorter phrases from the table, the probability would be 0.7189 • 0.4106 • 0.0046 = 0.0014, which is about an order of magnitude smaller than the original probability of 0.0128.
composing the phrase the French government out of shorter phrases has probability 0.7189 • 0.4106 • 0.6440 = 0.1901, which is very close to the original probability of 0.1686.
contrasting
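The arithmetic in this pair, reproduced for checking:

    p_short = 0.7189 * 0.4106 * 0.0046  # ≈ 0.0014, vs. 0.0128 for the long phrase
    p_gov = 0.7189 * 0.4106 * 0.6440    # ≈ 0.1901, vs. 0.1686 for "the French government"
    print(round(p_short, 4), round(p_gov, 4))  # 0.0014 0.1901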
train_10882
The results in Figure 2 to Figure 5 show that entropy-based pruning clearly outperforms the alternative pruning methods.
it is a bit hard to see from the graphs exactly how much additional savings it offers over other methods.
contrasting
train_10883
We can see that the pPDA extension gave modest improvements on the Urdu test set, but at a small decrease in performance on the Arabic data.
for Chinese, there is a substantial gain, particularly with jump distances of five or longer.
contrasting
train_10884
For example, all else equal, larger metric gains will tend to be more significant.
what does this relationship look like and how reliable is it?
contrasting
train_10885
Naively, it might seem like we would then check how often A beats B by more than δ(x) on x^(i).
there's something seriously wrong with these x^(i) as far as the null hypothesis is concerned: the x^(i) were sampled from x, and so their average δ(x^(i)) won't be zero like the null hypothesis demands; the average will instead be around δ(x).
contrasting
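A minimal sketch of the re-centered paired bootstrap this pair motivates: because the average of δ(x^(i)) sits near δ(x) rather than zero, one counts samples exceeding twice the observed gain. The function delta(), computing system A's metric gain over B on a sample, is assumed:

    import random

    def bootstrap_pvalue(delta, x, b=10000):
        # Resample x with replacement and count how often the resampled
        # gain exceeds 2 * delta(x), approximating a p-value under the
        # re-centered null hypothesis.
        observed = delta(x)
        exceed = sum(1 for _ in range(b)
                     if delta([random.choice(x) for _ in x]) > 2 * observed)
        return exceed / b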
train_10886
Do and Roth (2010) use a KB (YAGO) to aid the generation of features from free text.
their method is designed specifically for extracting hierarchical taxonomic structures, while our algorithm can be used to discover relations for general graph-based KBs.
contrasting
train_10887
The past decade has seen some promising solutions, unsupervised relation extraction (URE) algorithms that extract relations from a corpus without knowing the relations in advance.
most algorithms (Hasegawa et al., 2004, Shinyama and Sekine, 2006, Chen et.
contrasting
train_10888
Recently, Kok and Domingos (2008) proposed Semantic Network Extractor (SNE), which generates argument semantic classes and sets of synonymous relation phrases at the same time, thus avoiding the requirement of tagging relation arguments of predefined types.
SNE has 2 limitations: 1) Following previous URE algorithms, it only uses features from the set of input relation instances for clustering.
contrasting
train_10889
For example, SNE placed relation instances <Barbara, grow up in, Santa Fe> and <John, be raised mostly in, Santa Barbara> into 2 different clusters because the arguments and phrases do not share features nor could be grouped by SNE's mutual clustering.
WEBRE groups them together.
contrasting
train_10890
follow (ii) -we arrive at 'subtree ranking by MaxEnt'.
the 'n-best training' uses global trees and Maximum Entropy for training, so the reason for the difference between 'perceptron with global training' and 'n-best training' is (ii).
contrasting
train_10891
This decomposability assumption provides a fine framework in the case of binary features which fire if a certain linguistic phenomenon occurs.
this is not straightforward in the presence of real valued features.
contrasting
train_10892
The QUERY procedure for CM-CU is identical to Count-Min.
to UPDATE an item "x" with frequency c, first we compute the frequency ĉ(x) of this item from the existing data structure: ĉ(x) = min_{1≤j≤d} sketch[j, h_j(x)], and the counts are updated according to: sketch[j, h_j(x)] ← max{sketch[j, h_j(x)], ĉ(x) + c}. The intuition is that, since the point query returns the minimum of all the d values, we will update a counter only if it is necessary as indicated by the above equation.
contrasting
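A runnable sketch of the CM-CU structure described above; the hashing scheme (Python's built-in hash with per-row salts) is a simplification rather than the paper's choice:

    import random

    class CountMinCU:
        def __init__(self, d=4, w=1 << 20, seed=0):
            rng = random.Random(seed)
            self.salts = [rng.getrandbits(64) for _ in range(d)]
            self.w = w
            self.table = [[0] * w for _ in range(d)]

        def _cells(self, x):
            return [(j, hash((salt, x)) % self.w)
                    for j, salt in enumerate(self.salts)]

        def query(self, x):
            # Point query: minimum over the d counters.
            return min(self.table[j][h] for j, h in self._cells(x))

        def update(self, x, c=1):
            # Conservative update: raise each counter only as far as
            # query(x) + c, never beyond what the minimum requires.
            target = self.query(x) + c
            for j, h in self._cells(x):
                if self.table[j][h] < target:
                    self.table[j][h] = target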
train_10893
In PLEB generating random permutations and sorting the bit vectors of size k involves worse preprocessing time than using IRP.
spending more time in pre-processing leads to finding better approximate nearest neighbors.
contrasting
train_10894
If we move down the table, with b = 100, IRP, PLEB, and FAST-PLEB get results comparable to LSH (reaches an upper bound).
using large b implies generating a potential nearest neighbor list whose length is close to the number of unique context vectors.
contrasting
train_10895
Overall the annotator found almost all the learned words to be similar to the query words.
the algorithm cannot differentiate between different senses of the word.
contrasting
train_10896
This rule tests the tendency of two tokens appearing together when either one appears.
this test alone is insufficient, as it often generates coarse-grained results -e.g., baseball glove, softball glove, and Hi-Def TV.
contrasting
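One common instantiation of the tendency test mentioned above is a Jaccard-style ratio over occurrence counts; the excerpt does not specify the statistic, so this form is an assumption:

    def cooccurrence_tendency(c_xy, c_x, c_y):
        # Fraction of the occurrences of either token in which
        # both tokens appear together.
        return c_xy / (c_x + c_y - c_xy)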
train_10897
Such vast corpora have led to leaps in the performance of many language-based tasks: the concept is that simple models trained on big data can outperform more complex models with fewer examples.
this new view comes with its own challenges: principally, how to effectively represent such large data sets so that model parameters can be efficiently extracted?
contrasting
train_10898
Count-Min sketch with conservative update (CM-CU): The QUERY procedure for CM-CU (Cormode, 2009;Goyal and Daumé III, 2011a) is identical to Count-Min.
to UPDATE an item "x" with frequency c, we first compute the frequency ĉ(x) of this item from the existing data structure (ĉ(x) = min_{1≤j≤d} sketch[j, h_j(x)]) and the counts are updated according to (*): sketch[j, h_j(x)] ← max{sketch[j, h_j(x)], ĉ(x) + c}. The intuition is that, since the point query returns the minimum of all the d values, we update a counter only if it is necessary as indicated by (*).
contrasting
train_10899
From the above experiments, we conclude that tasks sensitive to under-estimation should use the CM-CU sketch, which guarantees over-estimation.
if we are willing to accept some under-estimation error in exchange for less over-estimation error, then LCU-WS and LCU-SWS are recommended.
contrasting