Columns:
- id: string (length 7–12)
- sentence1: string (length 6–1.27k)
- sentence2: string (length 6–926)
- label: string (4 classes)
train_8800
The independent classifier approach has the advantage of a simple model structure with a search space for tagging of O(|T|n).
while Liang et al.
contrasting
train_8801
For language processing applications, unsupervised learning of morphology can provide decent-quality analyses without resources produced by human experts.
while morphological analyzers and large annotated corpora may be expensive to obtain, a small amount of linguistic expertise is more easily available.
contrasting
train_8802
(2013) outperforms FlatCat for Finnish and reaches the same level for English.
we show that a discriminative model such as CRF gives inconsistent segmentations that do not work as well in a practical application: In English and Finnish information retrieval tasks, FlatCat clearly outperformed the CRF-based segmentation.
contrasting
train_8803
In addition, case elements are basically placed in the order of a nominative, a dative and an accusative.
the basic order of case elements is often changed by being influenced from grammatical and discourse factors.
contrasting
train_8804
Since there are a huge number of the structures S = ⟨O, D⟩ which are theoretically possible for an input sentence B, an efficient algorithm is desired.
since O and D are dependent on each other, it is difficult to find the optimal structure efficiently.
contrasting
train_8805
Second, the candidate 3 is generated by the concatenating process in ConcatReorder(M_{2,3}, M_{4,4}).
the reordering process in ConcatReorder(M_{2,3}, M_{4,4}) generates no candidates because b_4 has no child in M_{4,4}.
contrasting
train_8806
We can see that our method can achieve the complicated word reordering.
figure 5 shows an example of sentences incorrectly reordered and parsed by our method.
contrasting
train_8807
For instance, the date of Haiti earthquake considers the earthquake itself as the main event.
related events such as the sorrow expression of UN Secretary General also happen immediately after the earthquake but still in the same date.
contrasting
train_8808
(2012) uses a small, hand-coded grammar to describe a sparse set of prescribed activities.
we utilize corpus statistics to aid the description of a wide range of naturally-occurring videos.
contrasting
train_8809
However, all these conclusions came from training an SVM classifier in only one fixed topic.
in our paper, we draw our conclusions from all possible training/testing combinations rather than fixing in advance the training topic.
contrasting
train_8810
This cost of $90 for Doodling consists of hosting and bandwidth charges incurred for two virtual servers running on a commercial cloud platform.
once we scale Doodling up to permit more users and higher productivity, we expect the costs to remain fixed, whereas MT costs will scale proportionally to the productivity at US$0.10 per source.
contrasting
train_8811
The drawer scribbling answers to the canvas is a most obvious form of cheating, which may require sophisticated image recognition algorithms to weed out automatically.
we opted for a low-cost approach of allowing any guesser to mark a certain game round as cheating, if they find the drawer scribbling on the canvas.
contrasting
train_8812
Moreover, example (4) can be an answer to example (5).
those text pieces share neither any lexical root (make vs. ingredient) nor any syntactic structure (X predicate Y vs. predicate preposition (of) X linking-verb (is) Y) nor part-of-speech (verb vs. noun), so recognizing them as a textual entailment relation is harder than between examples (1) and (2).
contrasting
train_8813
Therefore, after tagging parts-of-speech to every word in the glosses, regular expressions are used to capture verbs.
texts from DBpedia are composed of several sentences and contain comparatively large numbers of verbs.
contrasting
train_8814
We also compared the performances of PreDic to the performances of WordNet.
getting similar verbs of PreDic from WordNet was hard because WordNet does not directly provide verbs for a noun unless the noun itself also has a verb form.
contrasting
train_8815
Table 6 shows that PreDic is the best at recall for both micro average (i.e., 0.43) and macro average (i.e., 0.42), while NOMLEX is best at precision for both micro average (i.e., 0.77) and macro average (i.e., 0.48).
the large difference in precision for NOMLEX between micro and macro average (i.e., 29 percent) shows that NOMLEX performs well on some nouns but not on other nouns.
contrasting
train_8816
However, the large difference in precision for NOMLEX between micro and macro average (i.e., 29 percent) shows that NOMLEX performs well on some nouns but not on other nouns.
preDic provides not only the better recall and broader coverage than NOMLEX and WordNet, but also competitive macro average precision (i.e., 0.39 vs. 0.41) compared to NOMLEX or even better micro average precision (i.e., 0.60 vs. 0.52) compared to WordNet.
contrasting
train_8817
Many researchers have suggested effective approaches for verb entailment acquisition and built valuable lexical resources with which the variability of natural language expression can be understood more systematically.
unsupervised verb inference from nouns that can deliver similar meaning without shared roots has not been explicitly addressed so far.
contrasting
train_8818
Anchor accuracy of new intra-language links could not be calculated because of the unavailability of gold standard data.
the proposed method specifies the anchor for the destination article only when possible anchors for it are found in the target-language article.
contrasting
train_8819
Both methods and our proposed method exploit the comparability between intra-language links in different language editions.
while the former find new ILLs, the latter finds new intra-language links.
contrasting
train_8820
Collected messages are dated from Jun 21, 2008 to Nov 7, 2009, and all of them are in Traditional Chinese.
the literal meanings of the posted messages need to be known.
contrasting
train_8821
We argue that these patterns are more static than the others, and we call them the customary patterns.
the patterns in Sections 4.1, 4.2 and 4.3 are called noncustomary patterns.
contrasting
train_8822
Since the Plurk platform can be used as an instant messaging system, and readers of the message are usually on the author's friend list, these messages are usually conversational.
ya-hoo blogs are not limited in length and a blog article itself is not part of the conversation.
contrasting
train_8823
For example, "太強" (so strong) is listed as a positive term in NTUSD.
it is used to indicate a negative condition in the example (s7).
contrasting
train_8824
Furthermore, several annotation efforts have been devoted to developing resources for different languages, needed for supervised learning (Hajič et al., 2009).
there is still a large number of languages for which corpora with semantic annotations do not exist.
contrasting
train_8825
The ideas behind their cross-lingual model adaptation resemble the ideas behind our global method for semantic role labelling.
in contrast to their work we do not consider the predicate labelling as given because, as manual annotations show (van der Plas et al., 2010), this task is not trivial.
contrasting
train_8826
This strong correlation between syntactic labels and semantic role labels in the PropBank annotation has been shown in detail by Merlo and Van der Plas (2009).
to previous work on monolingual unsupervised semantic role induction, we add the predicate label as a predictor.
contrasting
train_8827
We show that the combination of direct transfer (a high-precision method) and global methods (high in recall) outperforms previous results.
to previous work, we transfer predicate annotations and semantic role annotations by building two separate models tailored to the task at hand.
contrasting
train_8828
One approach for building such a multilingual semantic parsing system is to develop a joint generative process from which both the semantic representations and the sentences in different languages are generated simultaneously.
building such a joint model is non-trivial.
contrasting
train_8829
We can see from the results presented in Table 3 and Table 4 that, in general, the performance of the multilingual semantic parser tends to improve as the number of input languages increases.
this is not always the case.
contrasting
train_8830
der words which are essential for reliable clustering of the first order words.
the large number of occurrences and a large vocabulary make it intractable to run LDA using the original frequency of the second order words.
contrasting
train_8831
In previous self-training algorithms, the learner tries to convert the most confidently predicted unlabeled examples of each class into labeled training examples.
they evaluate the confidence of an instance only based on the individual evidence from the instance.
contrasting
train_8832
The straightforward way is to evaluate each instance with their linked confidence P L () from the classifier.
it oversimplifies the data dependence and does not make use of the correlated characteristics.
contrasting
train_8833
This conclusion for SA was opposite to TTC, so tp was preferred in subsequent SA research.
to local weight, global weight depends on the whole document collection.
contrasting
train_8834
The only difference between them lies in the quantification of the imbalance of a term's distribution.
existing methods more or less suffer from the problem of overweighting.
contrasting
train_8835
However, labeling reviews is often difficult, expensive or time consuming (Chapelle et al., 2006).
it is much easier to obtain a large number of unlabeled reviews, such as the growing availability and popularity of online review sites and personal blogs (Pang and Lee, 2008).
contrasting
train_8836
There are several works have been done in semi-supervised learning for sentiment classification, and get competitive performance (Li et al., 2010;Dasgupta and Ng, 2009;.
most of the existing semi-supervised learning methods are still far from satisfactory.
contrasting
train_8837
The model does not suffer from the label-bias problem as does the Maximum Entropy model, and its parameter estimation is well behaved with the help of a convex loss function.
the convexity of the loss function of the CRF model does not hold anymore when there are unobserved data or latent variables (Sutton and McCallum, 2007).
contrasting
train_8838
(2011) proposed eight rules to describe these relations.
their work only focused on English sentences, whereas the relations for Chinese sentences are different.
contrasting
train_8839
One of the main differences between a sentiment sentence and a formal sentence is that the former often contains polarity words.
to the features of feeling(•), polarity words (e.g., "great" in the sentence "Overall, this is a great camera") tend to be retained, because they are important and special to sentiment analysis.
contrasting
train_8840
Moreover, we can observe that the idea of sentence compression and our Sent Comp are useful for all the four product domains on T-P collocation extraction task, indicating that Sent Comp is domain adaptive.
we can find a small gap between auto Comp and manual Comp, which indicates that the Sent Comp model can still be improved further.
contrasting
train_8841
By an additional interpretation of the templates of these yield functions in the algebra of dependency trees (with the overt lexical items as roots), the LCFRS generates both strings and (possibly non-projective) dependency structures.
the running time of LCFRS parsers is generally very high, still polynomial in the sentence length, but with a degree determined by properties of the grammar; difficulties involved in running LCFRS parsers for natural languages are described by (Kallmeyer and Maier, 2013).
contrasting
train_8842
c B(c B(a A( ) b) d) d. We couple derivations in two grammars in a way similar to how this is commonly done for synchronous grammars, namely by indexed symbols.
we apply the mechanism not only to derivational nonterminals but also to terminals.
contrasting
train_8843
The highest score from the top six relations is achieved by taking words exclusively from the second-order secondary object (OBJ2) relation.
relatively few word types are included in the clusters.
contrasting
train_8844
numerals and determiners were simply parts of an NP but had no distinct labeling, like [NP az öt [ADJP fekete] kutya] (the five black dog) "the five black dogs", but it was necessary to assign them a dependency label and a parent node during conversion.
in some cases it was not straightforward which modifier modifies which parent node: for instance, in [NP nem [ADJP megfelelő] módszerek] (not appropriate methods) "inappropriate methods", the negation word nem is erroneously attached to the noun instead of the adjective in the converted phrase.
contrasting
train_8845
For instance, coordination and multiple modifiers are among the most frequent sources of errors in both cases as for the error rates are concerned.
with regard to the absolute numbers, we can see that both error types are reduced when the gold standard dataset is used for training.
contrasting
train_8846
In formal terms that we need for the outline of the transduction below, a SSyntS is defined as follows: Definition 1 (SSyntS) An SSyntS of a language L is a quintuple T_SS = ⟨N, A, λ_{ls→n}, ρ_{rs→a}, γ_{n→g}⟩ defined over all lexical items L of L, the set of syntactic grammemes G_synt, and the set of grammatical functions R_gr, where • the set N of nodes and the set A of directed arcs form a connected tree, • λ_{ls→n} assigns to each n ∈ N an l_s ∈ L, • ρ_{rs→a} assigns to each a ∈ A an r ∈ R_gr, and • γ_{n→g} assigns to each n ∈ N a g ∈ G_synt. The features of the node labels in DSyntSs as worked with in this paper are lex_dsynt and "semantic grammemes" of the value of lex_dsynt, i.e., number and determination for nouns and tense, aspect, mood and voice for verbs.
to lex_ssynt in SSyntS, DSyntS's lex_dsynt can be any full, but not a functional lexeme.
contrasting
train_8847
If required, a SSyntS-DSyntS structure pair can be also mapped to a pure predicate-argument graph such as the DELPH-IN structure (Oepen, 2002) or to an approximation thereof (as the Enju conversion (Miyao, 2006), which keeps functional nodes), to an DRS (Kamp and Reyle, 1993), or to a PropBank structure.
dSyntS-treebanks can be used for automatic extraction of deep grammars.
contrasting
train_8848
From WordNet (Fellbaum, 1998;Bond et al., 2009), we can derive entailment and contradiction relations using synsets and synset-links that represent relations such as 'troponym', 'antonym' and 'entailment'.
happens-before and anomalous obstruction relations cannot be derived from it, since there is no information on temporal ordering except that on causality.
contrasting
train_8849
In comparison to the n-gram baselines, only the parser by Seginer yields a higher score for frequent words and 1M sentences training in Setup A.
the difference is very small and is confirmed on the 10M sentences only in comparison to the Trigram baseline.
contrasting
train_8850
This means that for those particular tasks, disambiguation plays an important role.
this is not the case for the noun-noun pairs.
contrasting
train_8851
We introduce this hypothesis producer because the baseline system tends to miss long edit regions, especially when very few words in the region are repeated.
sometimes a speaker does change what he intends to say by aborting a sentence so that only the beginning few words are repeated, as in the above sentence.
contrasting
train_8852
Our preliminary experiments show that by increasing the clique order of features while reducing the number of labels (keeping about the same total number of parameters), we can maintain the same performance.
training takes a longer time.
contrasting
train_8853
This does not solve the problem entirely, since we know that named entities are not the only interesting nuggets -general terms and concepts can also be of interest to a reader.
we do have reason to believe that entities play a very prominent role in web content consumption, based on the frequency with which entities are searched for (see, for example Lin et al.
contrasting
train_8854
CDCD can be seen as a sub-task in the emerging wider field of argumentation mining that involves identifying argumentative structures within a document, as well as their potential relations (Mochales Palau and Moens, 2009;Cabrio and Villata, 2012;Wyner et al., 2012).
cDcD has several distinctive key features.
contrasting
train_8855
However, since CDCs often correspond to much smaller parts of their surrounding sentence, considering the scores of all previous components is more effective.
to the components described above, for which the training set is fully defined by the labeled data, the Ranking Component needs be trained also on the output of its "upstream" components, since it relies on the scores produced by these components.
contrasting
train_8856
Thus, it is possible to derive argumentative relations between components though they are not explicitly included.
to our work, the corpus consists of several text genres including newspaper editorials, parliamentary records, judicial summaries and discussion boards.
contrasting
train_8857
Therefore, they annotate a pair of arguments as either entailment or not.
to our work, the approach models relationships between pairs of arguments and does not consider components of individual arguments.
contrasting
train_8858
However, the approach is tailored to product reviews, and the work does not provide an inter-rater agreement study.
to previous work, our annotation scheme includes argument components and argumentative relations.
contrasting
train_8859
In this example, both components cover a complete sentence.
a sentence can also contain several argument components like in example (5).
contrasting
train_8860
Parallel treebanks are not something new.
most of the existing parallel treebanks (Li et al., 2012;Megyesi et al., 2010) do not have phrase alignments.
contrasting
train_8861
The holder of the epistemic modality in example 6 is not the Twitter user, either.
the Twitter user is not quoting anyone here, but is rather making an assumption about what the Egyptian National Party holds as TRUE.
contrasting
train_8862
Edges are not drawn between different nodes for the same mention.
they are drawn between two entities when there is a relation between them.
contrasting
train_8863
Experiments show that overall using the R m combining scheme is better than the R s scheme.
the highest rank, after combining graph rank score and initial confidence score, is not always correct.
contrasting
train_8864
That explains the little improvement over basic PR when using the initial confidence as an initial rank before using PR (see Table 1).
when comparing PR results in Tables 2 and 1, we can see that the PR algorithm is more sensitive to the links than to initial ranks.
contrasting
train_8865
Straightforward cases like these were used to justify the simple aggregation methods used by all TSF systems to date (Surdeanu, 2013;.
in reality even humans often must deal with vague and/or conflicting temporal information across documents, and systems must furthermore deal with the fact that each of their temporal relationship classifications is potentially false.
contrasting
train_8866
Each RNP holds at the DCT, and "Wednesday", as well as the day before that (the VTOP of "pardoned").
as for VTOP's further into the past, whether the post-relational state holds is less clear.
contrasting
train_8867
In addition, previous truth finding work assumed most claims are likely to be true.
most SF systems have hit a performance ceiling of 35% F-measure, and false responses constitute the majority class (72.02%) due to the imperfect algorithms as well as the inconsistencies of information sources.
contrasting
train_8868
We have discussed semantic spaces of relation expressions and the common semantic space as if to define what constitutes a relation expression is straightforward.
it is not trivial to define what constitutes a relation expression.
contrasting
train_8869
One can easily see that the common space successfully moved down many ambiguous expressions such as {compra} and {nor} in {announce acquisition}, and {would say} and{,} in {president ,}.
some relation expressions which are specific and semantically similar to the chosen ones moved up in the rank, for example {'s purchase} and {chief ,}.
contrasting
train_8870
This approach got the lowest results during the SemEval-2013 Task 12 evaluation due to a bug in the system.
the correct implementation achieves 0.583 of F-measure for English and 0.528 for Italian.
contrasting
train_8871
Previous WSI evaluations in SemEval (Agirre and Soroa, 2007;Manandhar et al., 2010) have approached sense induction in terms of finding the single most salient sense of a target word given its context.
as shown in Erk and McCarthy (2009), multiple senses of the target word may be perceived by readers from different angles and a graded notion of sense labeling may be considered as the most appropriate.
contrasting
train_8872
Due to the unsupervised nature of the task, participants were not provided with sense-labeled training data.
wSI systems were provided with the ukwac corpus (Baroni et al., 2009) to use in inducing senses.
contrasting
train_8873
and finance corpora (Koeling et al., 2005) and the BNC.
in these experiments we observed very high Novelty Ratio for many distractors (selected in a similar way to our other experiments).
contrasting
train_8874
The generalisation error is given by and approximated by its empirical counterpart on a finite m-sample of pairs where G i is a word graph and p i the best summarising sentence.
minimising the empirical risk directly leads to an ill-posed optimisation problem as there generally exist many indistinguishable but equally well solutions realising an empirical loss of zero.
contrasting
train_8875
For small training sets, the structural support vector machine performs only slightly better than the unweighted application of Yen's algorithm and is clearly outperformed by the unsupervised baselines.
the SVM improves by a generalised, loss-augmented shortest path algorithm that can be solved by an integer linear program in polynomial time.
contrasting
train_8876
This deficiency is problematic because in practical usage the maximum length of a summary is specified by the user; hence, the summarizer should be able to control output length.
to their method, our approach naturally takes the maximum summary length into account when summarizing a document.
contrasting
train_8877
The words in the document leads were likely to be important, and LEAD drew on this property.
as we mentioned later, it sacrificed the linguistic quality to achieve the high ROUGE score.
contrasting
train_8878
At first glance, we can mitigate this problem using distant supervision approaches.
there is difficulty in applying these approaches to MR classification: only one of the relation types defined in the 2010 i2b2 Challenge is represented in the Unified Medical Language System 1 , the most comprehensive medical ontology available to date.
contrasting
train_8879
where Ĉ^{t+1}_alt represents the estimated CPD at t + 1, and Ĉ^t_alt represents the estimated CPD at t. The other variables are the same as those in Equation (2).
we assume that the system can observe G user and S alt .
contrasting
train_8880
G user is not usually observable because traditional dialogue systems have automatic speech recognition/Spoken language understanding errors.
in this work, we use Wizard of Oz in place of automatic speech recognition/Spoken language understanding (Section 6.2).
contrasting
train_8881
Particularly, both of the learned policies better achieve user satisfaction than Random.
only Framing is able to achieve better persuasion success than Random.
contrasting
train_8882
Note that in the previous section, Sat_user and PS_sys are estimated from the simulated dialogue.
to the previous section, Sat_user and PS_sys are calculated from the result of the real user's questionnaire. But, A has extremely good performance.
contrasting
train_8883
While there is one previous example of persuasive dialogue using framing (Mazzotta et al., 2007), this system does not use an automatically learned policy, relying on handcrafted rules.
in our research, we apply reinforcement learning to learn the system policy automatically.
contrasting
train_8884
Language, as the primary form of human expression, is certainly critical.
analyzing meaning may require going beyond linguistic inference, depending on the context or application.
contrasting
train_8885
Frequently, their meanings are also systematically related.
there are also many examples of derivationally related lemma pairs whose meanings differ substantially, e.g., object_N – objective_N. Most broad-coverage derivational lexicons do not reflect this distinction, mixing up semantically related and unrelated word pairs.
contrasting
train_8886
There are almost no compound errors C, which is not surprising given the rule-based construction of the lexicon, and only a relatively small number (about 5%) of lemmatization errors L, which fall outside the scope of our work.
both N and M occur with substantial frequency: Each class accounts for around 10% of the pairs.
contrasting
train_8887
The advantage of the string transformation-based construction of DERIVBASE is its ability to include infrequent lemmas in the lexicon, and in fact DERIVBASE includes more than 250,000 content lemmas, some of which occur not more than three times in SDeWaC.
this is a potential problem when we build distributional representations for all lemmas in DERIVBASE since it is known from the literature that similarity predictions for infrequent lemmas are often unreliable (Bullinaria and Levy, 2007).
contrasting
train_8888
For example, classic dance is annotated as correct out of context because one could imagine using it in a context where it would denote some typical dance like: (13) They performed a classic Ceilidh dance.
in practice, the AN classical dance is used much more frequently, and classic dance is most often errorful in context, as in (4) above.
contrasting
train_8889
Since the ANs in these pairs are semantically similar, the features based on their semantic representations might not be discriminative enough.
the classifier is more effective in detecting errors in cases where the original AN and its correction are only similar in form, or not related to each other.
contrasting
train_8890
If we can generate sufficiently appropriate rules, these approaches seem to be effective.
there are many types of derivational patterns in SNS text and it is difficult to cover all of them by hand.
contrasting
train_8891
This enables our system to perform well even when the number of candidates increases.
several studies have applied a statistical approach.
contrasting
train_8892
In these studies it was assumed that clear word segmentations existed.
since Japanese is an unsegmented language the normalization problem needs to be treated as a joint normalization, word segmentation, and POS tagging problem.
contrasting
train_8893
Example Here is an example of a tweet that contains a URL: (1) #Localization #job: Supplier / Project Manager -Localisation Vendor -NY, NY, United States http://bit.ly/16KigBg #nlppeople The words in the tweet are all common words, but they occur without linguistic context that could help a tagging model to infer whether these words are nouns, verbs, named entities, etc.
on the website that the tweet refers to, all of these words occur in context: (2) The Supplier/Project Manager performs the selection and maintenance .
contrasting
train_8894
First of all, naive self-training does not work: accuracy declines or is just around baseline performance (Table 2 and Figure 3).
our augmented self-training methods with WEB or DICT reach large improvements.
contrasting
train_8895
Unsupervised classbased language models such as Random Forest LM (Xu and Jelinek, 2007), Model M (Chen, 2008) have been investigated that outperform a word-based LM.
the long-distance information is captured by using a cache-based LM that takes advantage of the fact that a word observed earlier in a document could occur again.
contrasting
train_8896
From is statistically significant to the class-based LM (Brown et al., 1992) and DCLM (Chien and Chueh, 2011) at a significance level of 0.01 and 0.05 respectively.
the IDCLM (L = 3) model is statistically significant to the above models at a significance level of 0.01.
contrasting
train_8897
Regarding different classes of uncertainty, we should mention that while weasels constitute the most frequent cue category in Wikipedia texts, they occur less frequently in the news corpus.
doxastic cues are frequent in the news corpus but in Wikipedia texts, their number is considerably smaller.
contrasting
train_8898
One possibility would be to use a specific label from the existing set of dependency relations, for example 'mwe'.
one-to-many alignments do not always refer to proper multi-word expressions but often represent other grammatical or structural differences like the relation between the English preposition 'of' which is linked together with the determiner 'the' to the German determiner 'der' in sentences like 'Resumption OF THE session' translated to German 'Wiederaufnahme DER Sitzung'.
contrasting
train_8899
Somewhat surprisingly we can see that the recall-oriented alignment heuristics (grow-diag-final-and) actually perform quite well in many cases, leading either to the best performing model or to one that is very close to the best result.
in some cases, these models fall behind the ones based on alignment intersections (for instance Spanish-English) or directional word alignments (for example for Spanish-German, French-English, Swedish-German).
contrasting