id: stringlengths 7-12
sentence1: stringlengths 6-1.27k
sentence2: stringlengths 6-926
label: stringclasses, 4 values
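The fields above describe a sentence-pair classification schema: an example id, two sentences drawn from scientific papers, and a discourse label with four possible classes (every row shown in this preview is labelled "contrasting"). As a minimal sketch, assuming the dataset is hosted on the Hugging Face Hub, the rows could be loaded and inspected with the `datasets` library as below; the repository id "user/sci-sentence-pairs" is a placeholder, not the actual dataset name.

```python
# Minimal sketch (not an official loading script): inspecting a sentence-pair
# dataset with the Hugging Face `datasets` library. The repository id below is
# a placeholder for wherever this dataset is actually hosted.
from datasets import load_dataset

train = load_dataset("user/sci-sentence-pairs", split="train")  # hypothetical repo id

row = train[0]
print(row["id"])         # e.g. "train_10700"
print(row["sentence1"])  # first sentence of the pair
print(row["sentence2"])  # second sentence of the pair
print(row["label"])      # one of four discourse classes, e.g. "contrasting"
```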
train_10700
Most often this information is used in the form of relationships between words - for example, how semantically similar two words are, or which nouns are the objects of a verb.
it is likely that humans make use of much higher-level information than the similarity between two concepts when processing language (Abelson, 1981).
contrasting
train_10701
In all three cases, errors from the parser (or POS tagger) may lead to the deletion of valid mentions.
we found the impact of this was small and was outweighed by the number of spurious mentions removed.
contrasting
train_10702
P refers to the probability of a coreference link between two mentions produced by our maximum entropy model, and x is a binary variable that is set to 1 if two mentions are coreferent, 0 otherwise.
as Finkel and Manning showed, D&B's coreference-only model without transitivity constraints is not really necessary, because they only select the coreference links with probability P > 0.5.
contrasting
train_10703
The transitivity constraints are formulated such that when any two coreferent links (e.g., x_ij and x_jk) among three mentions exist, the third one, x_ik, must also be a link.
these constraints also bring huge time and space complexity, with n^3 constraints (n is the size of the candidate mention set M, which is larger than 700 in some documents), and cannot be solved in a restricted time and memory environment.
contrasting
train_10704
The improvement is due to gains of 3-5% in precision for MUC and B³, which are counteracted by smaller losses in recall.
cEAFE shows a loss in precision and a similar gain in recall, resulting in a minimal increase in F-score.
contrasting
train_10705
We report results using this scorer in Section 5.
we used the Reconcile-internal versions of scorers to optimize the threshold.
contrasting
train_10706
These results are not directly comparable to ours, as indicated by * and **.
we still see that the performance of our type-level transfer method approaches that of bitext-based methods, which require complex bilingual training for each new language.
contrasting
train_10707
At the same time, our model forces the residual matrices to be small and this is probably the reason why it performs competitively with the Bridge-CCA.
our decoding method, as shown in Eq.
contrasting
train_10708
(2008) and Daumé III and Jagarlamudi (2011) proposed a generative model based on probabilistic canonical correlation analysis, where words are represented by context features and orthographic features 2 .
their experiments showed orthographic features to be important for effectiveness, which means low performance for language pairs with different character types.
contrasting
train_10709
Words such as this occur even if using Lex_L, and that number increases when Lex_S is used.
the proposed methods are able to utilize all the seeds in order to find equivalents for words such as these.
contrasting
train_10710
This is because synonyms tend to be linked in the similarity graph and have similar seed distributions.
in the co-occurrence graph, synonyms tend to be indirectly linked through mutual context words, so the seed distributions of the two could be far away from each other.
contrasting
train_10711
HAPPY comes next, then DISGUST and FEAR.
at lower confidence levels, HAPPY has the same number of significant features as ANGER and SAD, which is in line with the result in Table 6.
contrasting
train_10712
The sequence of words connecting the two entities is a very good predictor of whether they are related or not.
these paths are completely lexicalized and consequently their performance will be limited by data sparseness.
contrasting
train_10713
We could in theory use an external model of noise to account for these value discrepancies (and the ASR errors we model in the next section).
this would further decrease the probability, as some probability mass currently assigned to the heldout data would have to be reserved for the possibility of string renderings other than those we observe.
contrasting
train_10714
The observed words of each block are generated by repeatedly sampling classes from the block's distribution π, and for each sampled class z, a single word is sampled from the class-specific distribution over words φ_z.
under the block HMM, a class z is sampled once from the transition distribution, and words are repeatedly sampled from φ_z.
contrasting
train_10715
the occurrence of a topic in one segment makes it likely to appear in the next.
we wish to learn arbitrary transitions, both positive and negative, between the latent classes.
contrasting
train_10716
Numerous previous studies have considered distant or weak supervision from a single relational database as an alternative to manual supervision for information extraction (Hoffmann et al., 2011;Weld et al., 2009;Bellare and McCallum, 2007;Bunescu and Mooney, 2007;Mintz et al., 2009;.
to these systems, our distant supervision NED system provides a meta-algorithm for generating an NED system for any database and any entity type.
contrasting
train_10717
When informally examining the rules induced by our system, we found that CD rules are similar in spirit to those written by rule developers.
the induced CR rules are too fine-grained.
contrasting
train_10718
One would then union together the candidates from each of the two groups into two different views, e.g., Per-FirstLast and PerLastCommaFirst, and write filter rules at the higher level of these two views, e.g., "Remove PerLastCommaFirst spans that overlap with a PerFirstLast span".
our induction algorithm considers CR rules consisting of combinations of CD rules directly, leading to many semantically similar CR rules, each operating over small parts of a larger semantic group (see rule in Section 6.1).
contrasting
train_10719
Sentiment classification has become a hot research topic in NLP community and various kinds of classification methods have been proposed, such as unsupervised learning methods (Turney, 2002), supervised learning methods (Pang et al., 2002), semi-supervised learning methods (Wan, 2009;Li et al., 2010), and cross-domain classification methods (Blitzer et al., 2007;Li and Zong, 2008;He et al., 2011).
imbalanced sentiment classification is relatively new and there are only a few studies in the literature.
contrasting
train_10720
(2011b) focus on supervised learning for imbalanced sentiment classification and propose a clustering-based approach to improve traditional under-sampling approaches.
the improvement of the proposed clustering-based approach over under-sampling is very limited.
contrasting
train_10721
Their simulation experiment on text categorization confirms that selecting class-balanced samples is more important than traditional active selection strategies like uncertainty.
the proposed experiment is simulated, and no real strategy is proposed to balance the class distribution of the selected samples.
contrasting
train_10722
They first select a set of uncertainty samples and then randomly select balanced samples from the uncertainty-sample set.
the classifier used for selecting balanced samples is the same as the one for supervising uncertainty, which makes the balance control unreliable (the selected uncertainty samples have very low confidences, which are unreliable for correctly predicting the class label for controlling the balance).
contrasting
train_10723
We propose the weakly supervised Multi-Experts Model (MEM) for analyzing the semantic orientation of opinions expressed in natural language reviews.
to most prior work, MEM predicts both opinion polarity and opinion strength at the level of individual sentences; such fine-grained analysis helps to understand better why users like or dislike the entity under review.
contrasting
train_10724
Note that the bigram features in x i partially capture sentence similarity.
such features cannot be extended to longer subsequences such as trigrams due to data sparsity: useful features become as infrequent as noisy terms.
contrasting
train_10725
Similar to SO-CAL, we determine the prior polarity of a phrase based on the BoO dictionary.
to SO-CAL, we directly use the BoO score as a feature because the BoO predictor weights have been trained on a very large corpus and are thus reliable.
contrasting
train_10726
Many researchers have developed several algorithms for this purpose and generated large static lexicons of words marked with prior polarities (Hatzivassiloglou and McKeown, 1997;Turney et al., 2003;Esuli, 2008;Mohammad et al., 2009;Velikovich et al., 2010).
there exist some polarity-ambiguous words, which can dynamically reflect different polarities along with different contexts.
contrasting
train_10727
Thus some previous work turned to investigate its surrounding contexts' polarities (such as the sentences before or after s), and then assigned the majority polarity to the collocation c (Hatzivassiloglou and McKeown, 1997;Hu and Liu, 2004;Kanayama and Nasukawa, 2006).
since the amount of contexts from the original review is very limited, the final resulting polarity for the collocation c is insufficient to be reliable.
contrasting
train_10728
Overall, most of the above approaches aim to generate a large static polarity word lexicon marked with prior polarities.
it is not sensible to predict a word's sentiment orientation without considering its context.
contrasting
train_10729
This suggests that the contexts expanded from other reviews are helpful in disambiguating the collocation's polarity.
exp dataset is only effective in disambiguating the polarity of such a collocation c, which appears many times in the domain-related reviews.
contrasting
train_10730
This can result in the low accuracy of Strategy0.
we can find that the other three query expansion strategies perform well.
contrasting
train_10731
As a matter of fact, the same holds true for the task of detecting paraphrases.
to RTE, this latter task requires bi-directional entailments, i.e., each of the two phrases must entail the other.
contrasting
train_10732
The advantage of our method compared to offthe-shelf clustering techniques is two-fold: On the one hand, the clustering algorithm is free of any parameters, such as the number of clusters or a clustering threshold, that require fine-tuning.
the approach makes use of a termination criterion that very well represents the nature of the goal of our task, namely to align pairs of predicates across comparable texts.
contrasting
train_10733
Hence, the extracted sentence pairs do not always contain gold alignments.
even sentence pairs that contain gold alignments are generally less parallel than in the previous settings, which make them harder to align.
contrasting
train_10734
The increased difficulty can also be seen in the results for the Greedy baseline, which only achieves an F_1-score of 20.1% in this setting.
we observe that the majority of all sure alignments (60.3%) can be retrieved by applying the LemmaId model.
contrasting
train_10735
(2000), Lapata (2003) propose a probabilistic model for finding the correct interpretation of such metonymies in an unsupervised manner.
these event type metonymies differ from the problem dealt with in our paper and the SemEval 2007 task in that their recognition (i.e.
contrasting
train_10736
Learning inference relations between verbs is at the heart of many semantic applications.
most prior work on learning such rules focused on a rather narrow set of information sources: mainly distributional similarity, and to a lesser extent manually constructed verb co-occurrence patterns.
contrasting
train_10737
This led to more precise rule extraction, but with poor coverage since contrary to nouns, in which patterns are common (Hearst, 1992), verbs do not co-occur often within rigid patterns.
verbs do tend to co-occur in the same document, and also in different clauses of the same sentence.
contrasting
train_10738
Stative verbs, such as 'love' and 'think', usually describe a state that lasts some time.
event verbs, such as 'run' and 'kiss', describe an action.
contrasting
train_10739
Adding latent variables to these models gives us additional modeling power and has shown success in applications like POS tagging (Merialdo, 1994), speech recognition (Rabiner, 1989) and object recognition (Quattoni et al., 2004).
this comes at the cost that the resulting parameter estimation problem becomes non-convex and techniques like EM (Dempster et al., 1977) which are used to estimate the parameters can only lead to locally optimal solutions.
contrasting
train_10740
(2006) and Musillo and Merlo (2008) have shown that learning PCFGs and dependency grammars respectively with latent variables can produce parsers with very good generalization performance.
both these approaches rely on EM for parameter estimation and can benefit from using spectral methods.
contrasting
train_10741
When m = 1, this conditioning is analogous to TNG's word distribution.
in contrast with TNG, the word ... (Figure 3 caption: Illustration of the hierarchical Pitman-Yor process for a toy two-word vocabulary V = {honda, civic} and a two-topic (T = 2) model with m = 1.)
contrasting
train_10742
As such most prior work for learning SCFGs has relied on inference algorithms that were heuristically constrained or biased by word-based alignment models and small experiments (Wu, 1997;Zhang et al., 2008;Blunsom et al., 2009;Neubig et al., 2011).
to these previous attempts, our SCFG model scales to large datasets (over 1.3M sentence pairs) without imposing restrictions on the form of the grammar rules or otherwise constraining the set of learnable rules (e.g., with a word alignment).
contrasting
train_10743
A naive instantiation of this strategy is to visit all |s|^2 |t|^2 bispans in some order.
since we wish to be able to draw many samples, this is not computationally feasible.
contrasting
train_10744
A much more efficient approach avoids resampling variables that would result in violations without visiting each of them individually.
to ensure detailed balance is maintained, the order in which we resample bispans has to match the order in which we would sample them using any exhaustive approach.
contrasting
train_10745
It works particularly well in the sentence extraction paradigm.
additional elements are known to be good predictors of important information.
contrasting
train_10746
Assuming that their parent head is the main verb of the sentence, a long-short sequence would minimize overall dependency length.
in 613 examples found in the Penn Treebank, the average length of the first adjunct was 3.15 words while the second adjunct was 3.48 words long, thus reflecting a short-long pattern, as illustrated in the Temperley p.c.
contrasting
train_10747
Consequently, the score function, which denotes some criterion for the quality of a summary, tends to be determined so that the function can be decomposed to components and it is solved with global inference algorithms, such as ILP.
both decomposing the score function properly and utilizing the evaluation of half-way process of searches are generally difficult.
contrasting
train_10748
As insert i actions are dominant in the extractive approach, the execution time increases linearly with respect to the number of textual units.
iLP has to take into account the combinations of textual units, whose number increases exponentially.
contrasting
train_10749
For example, in one sentence, the probability of co-occurrence of "present → present", "past → past" and "future → present" is more than other combinations, which can be against tense inconsistency errors described in Observation (1) and (2) (see Section 1).
it seems strange that "present → past" exceeds "present → future".
contrasting
train_10750
Compared to IN2EN, all CN:* models have higher 2/3/4-gram precision.
cN:pivot has lower unigram precision, which could be due to bad word alignments, as the results for cN:pivot ′ show.
contrasting
train_10751
Table 6 shows that CN:pivot,t=0 is better/equal to the original 53%/31% of the time.
cN:pivot ′ +morph is typically worse than the original; even compared to the best in top 3, the better:worse ratio is 45%:43%.
contrasting
train_10752
We believe this is because lattice encodes many options, but does not use a Malay LM, while 1-best uses a Malay LM, but has to commit to 1-best.
cN:pivot uses both n-best outputs and an Indonesian LM; designing a similar setup for reversed adaptation is a research direction we would like to pursue in future work.
contrasting
train_10753
It is undeniable that huge progress has been made in the field of supervised dependency parsing, especially due to the CoNLL shared task series.
when it comes to unsupervised parsing, there are surprisingly few clues we could rely on.
contrasting
train_10754
(2011) showed that such projections produce better structures than the current unsupervised parsers do.
our task is different.
contrasting
train_10755
Another way that would lead to lower sparsity would be to search for sequences of part-of-speech tags instead of sequences of word forms.
this also does not bring the desired results.
contrasting
train_10756
While the first method generates only flat trees, the second one can generate all possible projective trees.
the sampler converges to similar results for both the initializations.
contrasting
train_10757
Our main motivation for developing an unsupervised dependency parser was that we wanted to be able to parse any language.
the experiments show that our parser fails for some languages.
contrasting
train_10758
Graph-based parsers (Eisner, 1996;McDonald et al., 2005) are based on global optimization of models that work by scoring subtrees.
transition-based parsers (Yamada and Matsumoto, 2003;Nivre et al., 2004), which are the focus of this work, use local training to make greedy decisions that deterministically select the next parser state.
contrasting
train_10759
In that paper, the LEFT-ARC transition from Nivre's arc-eager transition system is added to a list-based parser.
the goal of that transition is different from ours (selecting between projective and nonprojective parsing, rather than building some arcs in advance) and the approach is specific to one algorithm while ours is generic - for example, the LEFT-ARC transition cannot be added to the arc-standard and arc-eager parsers, or to extensions of those like the ones by Attardi (2006) or Nivre (2009), because these already have it.
contrasting
train_10760
Non-projective transitions that create dependency arcs between non-contiguous nodes have been used in the transition-based parser by Attardi (2006).
the transitions in that parser do not use the second buffer node, since they are not intended to create some arcs in advance.
contrasting
train_10761
Each of these higher-order parsing algorithms makes a clever factorization for the specific model in consideration to keep complexity as low as possible.
this results in a loss of generality.
contrasting
train_10762
Thus, the addition of new higher-order features, including valency, extra third-order, and label tuple features, results in increased accuracy.
this is not without cost as the run-time in terms of tokens/sec decreases (300 to 220).
contrasting
train_10763
For example, /m/ is a nasal (air flowing through the nostrils), while /p/ is a plosive (obstructed air suddenly released through the mouth).
vowels are voiced sounds produced with an open vocal tract.
contrasting
train_10764
This result is somewhat surprising, as we expected these features to be quite informative.
it appears that whatever information they convey is redundant when considering the text-based feature sets.
contrasting
train_10765
First, we notice that the average entropy of voiced vs. unvoiced consonants is nearly identical in both cases, close to the optimal value.
when we examine the dimensions of place and manner, we notice that the entropy induced by our model is not as high as that of the true consonant inventories, implying a suboptimal allocation of consonants.
contrasting
train_10766
Collecting all such links within English Wikipedia yields a large number of aliases for each page.
many redirects are for topics other than individual people, and these would be poor examples of name variation.
contrasting
train_10767
Our method is really intended to be run on a corpus of string tokens.
for experimental purposes, we instead use the above dataset of string types because this allows us to use the "ground truth" given by the Wikipedia redirects.
contrasting
train_10768
The other way around (rejecting what should be allowed) is easier to check, and we find that of 13K word types in AMI, about 7.2% are rejected for non-appearance in Gigaword, after filtering for interjections like "mm-hmm".
we manually checked them and returned all but 2.9% of word types to the corpus.
contrasting
train_10769
Our result points a way towards explaining this phenomenon by demonstrating that the differences between current-technology artificial speech and natural speech can be partially explained through higher-level syntactic features.
further experimentation is required on other measures of syntactic complexity (e.g.
contrasting
train_10770
This setting may not necessarily reflect a "real world" distribution of why-questions, in which ideally a wide range of people ask questions that may or may not have an answer in our corpus (the corpus was provided by Yahoo Japan Corporation and contains 16 million questions asked from April 2004 to April 2009).
qS3 allows us to evaluate our method under the idealized conditions where we have a perfect answer retrieval module whose answer candidates always contain at least one correct answer (the source passage used for creating the why-question).
contrasting
train_10771
(2007) used lexical-conceptual templates for query generation.
this work did not address the crucial issue of disambiguating the constituents of the question.
contrasting
train_10772
In statistical machine translation, minimum error rate training (MERT) is a standard method for tuning a single weight with regard to a given development set.
due to the diversity and uneven distribution of source sentences, this method suffers from two problems.
contrasting
train_10773
Since the transformed classification problem is not linearly separable, there does not exist a single weight which can obtain e_11 and e_21 as translation results simultaneously.
one can obtain e_11 and e_21 with the weights (1, 1) and (−1, 1), respectively.
contrasting
train_10774
Therefore, there exists no single weight W which simultaneously obtains e_11 and e_21 as translations for f_1 and f_2 via Equation (1).
we can achieve this with two weights: (1, 1) for f_1 and (−1, 1) for f_2.
contrasting
train_10775
This method can automatically adapt the divergence between different annotation guidelines and bring improvement to Chinese word segmentation.
the need of cascaded classification decisions makes it less practical for tasks of high computational complexity such as parsing, and less efficient to incorporate more than two annotated corpora.
contrasting
train_10776
In decoding, a raw sentence is first decoded by the source classifier, and then inputted into the transformation classifier together with the annotations given by the source classifier, so as to obtain an improved classification result.
annotation adaptation has a drawback: it has to cascade two classifiers in decoding to integrate the knowledge in the two corpora, which seriously degrades the processing speed.
contrasting
train_10777
Considering the fact that today some corpora for word segmentation are really large (usually tens of thousands of sentences), it is necessary to obtain the latest CTB and investigate whether and how much annotation transformation brings improvement to a much higher baseline.
it is valuable to conduct experiments with more source-annotated training data, such as the SIGHAN dataset, to investigate the trend of improvement along with the increment of the additional annotated sentences.
contrasting
train_10778
They approach lexical variant detection by using a context fitness classifier (Han and Baldwin, 2011) or through dictionary lookup (Gouws et al., 2011).
the lexical variant detection of both methods is rather unreliable, indicating the challenge of this aspect of normalisation.
contrasting
train_10779
Our method adopts a similar strategy using distributional/string similarity, but instead of constructing a small lexicon for preprocessing, we build a much wider-coverage normalisation dictionary and opt for a fully lexiconbased end-to-end normalisation approach.
to the normalisation dictionaries of Han and Baldwin (2011) and Gouws et al.
contrasting
train_10780
This result seems rather discouraging.
considering that S-dict is an automatically-constructed dictionary targeting lexical variants of varying frequency, it is not surprising that the precision is worse than that of HB-dict, which is manually constructed, and GHM-dict, which includes entries only for more-frequent OOVs for which distributional similarity is more accurate.
contrasting
train_10781
Their approach works well when each sentence potentially refers to one of a small set of possible meanings, such as in the sportscasting task.
it does not scale to problems with a large set of potential meanings for each sentence, such as the navigation instruction following task studied by Chen and Mooney (2011).
contrasting
train_10782
assume that every atomic MR generates at least one NL word.
since we do not know which subgraph of the overall context (i.e.
contrasting
train_10783
(2011) simply read the MR, m, for a sentence off the top S_m nonterminal of the most probable parse tree.
in our approach, the correct MR is constructed by properly composing the appropriate subset of lexeme MRs from the most-probable parse tree.
contrasting
train_10784
(2010) developed a system that learns to map NL instructions to executable commands for a robot navigating in an environment constructed by a laser range finder.
their approach has the limitation of ignoring any objects or other landmarks in the environment to which the instructions can refer.
contrasting
train_10785
Most of today's SMT systems depend heavily on parallel corpora aligned at the word level to train their different component models.
such annotations do have their drawbacks in training.
contrasting
train_10786
from free text, has received renewed interest in the "big data" era, when petabytes of natural-language text containing thousands of different structure types are readily available.
traditional supervised methods are unlikely to scale in this context, as training data is either limited or nonexistent for most of these structures.
contrasting
train_10787
Since then, the approach grew in popularity (Bunescu and Mooney, 2007;Bellare and McCallum, 2007;Wu and Weld, 2007;Mintz et al., 2009;Riedel et al., 2010;Hoffmann et al., 2011;Nguyen and Moschitti, 2011;Sun et al., 2011;Surdeanu et al., 2011a).
most of these approaches make one or more approximations in learning.
contrasting
train_10788
Thus the log-likelihood for this problem is not convex (it includes a sum of products).
we can still use EM, but the optimization focuses on maximizing the lower bound of the loglikelihood, i.e., we maximize the above joint probability for each entity pair in the database.
contrasting
train_10789
It focused on higher textual dimensions, such as inference load (Kintsch and Vipond, 1979;Kemper, 1983), density of concepts (Kintsch and Vipond, 1979), or macrostructure (Meyer, 1982).
these attempts did not achieve better results than the classic approach, even though they used more principled and more complex features.
contrasting
train_10790
Since our study intended to design a generic model, we focused on specific predictors affecting L2 reading, whatever the learner's mother tongue is: Multi-word expressions (MWE): MWEs are acknowledged to cause problems to L2 learners for production (Bahns and Eldaw, 1993).
the effect of MWE on the reception side remains unclear, especially for beginners.
contrasting
train_10791
First, we showed that maximizing the type of linguistic information might not be the best path to go, since a model based on four lexico-syntactic features yielded predictions as accurate as those of a model relying on our Exp1 set of variables.
this finding might be partly accounted for by the lower predictive power of the features from the semantic and specific-to-FFL family, with the notable exception of the LSA-based predictor (avLocalLsa-Lem), which is the third best predictor when considered alone.
contrasting
train_10792
Not only did the expert models, on which we imposed the presence of one or two semantic predictors, not perform the best, but none of the features from our semantic set was retained during the automatic selection of the variables for the logistic models.
in some subsets, the LSA-based feature was sometimes considered collinear with the other variables.
contrasting
train_10793
In case (c), it is possible for there to be multiple children that inherit this newly created gap if multiple children had descendents on both sides.
the assumption of upward movement in the phrase structure tree should rule out movement into the projection interval of a non-ancestor.
contrasting
train_10794
In (1), the coreference relation between One of the key suspected Mafia bosses arrested yesterday and Lo Presti can be found by knowing that their predicates (i.e., has hanged and had hanged) corefer.
the coreference relations between the arguments Saints and Bush in (2) help to determine the coreference relation between their predicates placed and put.
contrasting
train_10795
Our model follows a cautious (or "baby steps") approach, which we previously showed to be successful for entity coreference resolution (Raghunathan et al., 2010;Lee et al., 2011).
unlike our previous work, which used deterministic rules, in this paper we learn a coreference resolution model using linear regression.
contrasting
train_10796
Note that this avoids inspecting many of the possible cluster combinations: once a cluster is built (e.g., during the previous iterations or by the deterministic sieves in step 8), we do not generate training data from its members, but rather treat it as an atomic unit.
our approach generates more training data than online learning, which trains using only the actual decisions taken during inference in each iteration (i.e., the pair (e 1 , e 2 ) in step 13).
contrasting
train_10797
The performances when the phrase length is 1 are better than those of the single word-based TM (row 6 and row 13 in Table 2), which suggests that the features in our linear ranking model are useful.
it will be instructive to explore methods of preserving the improvement generated by longer phrases when more features are incorporated in future work.
contrasting
train_10798
While traditional Information Extraction (IE) (ARPA, 1991;ARPA, 1998) focused on identifying and extracting specific relations of interest, there has been great interest in scaling IE to a broader set of relations and to far larger corpora (Banko et al., 2007;Hoffmann et al., 2010;Mintz et al., 2009;Carlson et al., 2010;.
the requirement of having pre-specified relations of interest is a significant obstacle.
contrasting
train_10799
To apply pattern #1 from Figure 3 we first match arg1 to 'festival', rel to 'scheduled' and arg2 to '25th' with prep 'for'.
(festival, be scheduled for, 25th) is not a very meaningful extraction.
contrasting