id: string (7-12 chars)
sentence1: string (6-1.27k chars)
sentence2: string (6-926 chars)
label: one of 4 classes
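The records below follow this schema. As a minimal sketch of how one might load and inspect such a dataset with the Hugging Face `datasets` library (the dataset path `your-org/contrast-pairs` is a placeholder, not the actual identifier):

```python
from datasets import load_dataset

# Placeholder dataset path; substitute the real identifier.
ds = load_dataset("your-org/contrast-pairs", split="train")

# Each record pairs two sentences excerpted from a paper with a
# discourse label (one of 4 classes, e.g. "contrasting").
for record in ds.select(range(3)):
    print(record["id"], "->", record["label"])
    print("  sentence1:", record["sentence1"][:80])
    print("  sentence2:", record["sentence2"][:80])
```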
train_2500
So far the arguments in D_π have been semantically annotated with the Wikipedia pages they refer to.
using Wikipedia as our sense inventory is not desirable; in fact, contrary to other commonly used lexical-semantic networks such as WordNet, Wikipedia is not formally organized in a structured, taxonomic hierarchy.
contrasting
train_2501
As a result of the probabilistic association of each semantic class c with a target lexical predicate, once the semantic predicates for the input lexical predicate π have been learned, we can classify a new filling argument a of π.
the class probabilities calculated with Formula 1 might not provide reliable scores for several classes, including unseen ones whose probability would be 0.
contrasting
train_2502
The availability of Web-scale corpora has led to the production of large resources of relations (Etzioni et al., 2005;Yates et al., 2007;Wu and Weld, 2010;Carlson et al., 2010;Fader et al., 2011).
these resources often operate purely at the lexical level, providing no information on the semantics of their arguments or relations.
contrasting
train_2503
As a result, alternative approaches have been proposed that eschew taxonomies in favor of rating the quality of potential relation arguments (Erk, 2007;Chambers and Jurafsky, 2010) or generating probability distributions over the arguments (Rooth et al., 1999;Pantel et al., 2007;Bergsma et al., 2008;Ritter et al., 2010;Séaghdha, 2010;Bouma, 2010;Jang and Mostow, 2012) in order to obtain higher coverage of preferences.
we overcome the data sparsity of class-based models by leveraging the large quantity of collaboratively-annotated Wikipedia text in order to connect predicate arguments with their semantic class in WordNet using BabelNet (Navigli and Ponzetto, 2012); because we map directly to WordNet synsets, we provide a more readily interpretable collocation preference model than most similarity-based or probabilistic models.
contrasting
train_2504
We will show that they cannot be reconstructed solely from the source text, extending Copeck and Szpakowicz (2004)'s result to caseframes.
we also show that if articles from the same domain are added, reconstruction then becomes possible.
contrasting
train_2505
The introduction of nonterminals in the SCFG rules provides some degree of generalization.
as mentioned earlier, the context-free assumption ingrained in the syntax-based formalism often limits the model's ability to influence global reordering decisions that involve cross-boundary contexts.
contrasting
train_2506
A straightforward approach for this is to use a supervised aligner to align the words in the sentences and then derive the reference reordering as we do for manual word alignments.
as we will see in the experimental results, the quality of a reordering model trained from automatic alignments is very sensitive to the quality of alignments.
contrasting
train_2507
The motivation for this is to explore if a good model can be learned even from a small amount of data if we restrict the number of features in a reasonable manner.
we see that even with only 2.4K sentences with manual word alignments our model benefits from lexical identities of more than the 1000 most frequent words.
contrasting
train_2508
These specifications were written for human programmers with no intention of providing support for automated processing.
when trained using the noisy supervision, our method achieves substantially more accurate translations than a state-of-the-art semantic parser (Clarke et al., 2010) (specifically, 80.0% in F-Score compared to an F-Score of 66.7%).
contrasting
train_2509
We use the technique described in Section 4.1 to determine the gender of a speaker of the current utterance.
with EM2010, this is not a hard constraint.
contrasting
train_2510
This renders our gender feature virtually useless, and results in lower accuracy on anaphoric speakers than on explicit speakers.
chekhov prefers to mention speaker names in the dialogues (46% of utterances are in the explicit-speaker category), which makes his prose slightly easier in terms of speaker identification.
contrasting
train_2511
Representing the probability of DOD as θ, after n occurrences of the verb the probability that y of them are DOD is p(y | θ, n) = (n choose y) θ^y (1 − θ)^(n−y). Considering that p(y | θ, n) is the likelihood in a Bayesian framework, the simplest and most intuitive estimator of θ, given y in n verb occurrences, is the Maximum Likelihood Estimator (MLE): θ_MLE = y/n. θ_MLE is viable as a learning model in the sense that its accuracy increases as the amount of evidence for a verb grows (n → ∞), reflecting the incremental, on-line character of language learning.
one well known limitation of MLE is that it assigns zero probability mass to unseen events.
contrasting
train_2512
When learning begins, the prior probability is the only source of information for a learner and, as such, dominates the value of the posterior probability.
in the large sample limit, it is the likelihood that dominates the posterior distribution regardless of the prior.
contrasting
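To make the contrast between train_2511 and train_2512 concrete, a standard Beta-Binomial formulation (an illustrative assumption; the records do not name the authors' exact prior) yields a posterior-mean estimator that avoids the zero-probability problem yet converges to the MLE:

```latex
% Beta(\alpha, \beta) prior + binomial likelihood => posterior mean
\hat{\theta}_{\mathrm{Bayes}} = \frac{y + \alpha}{n + \alpha + \beta}
% Unlike \hat{\theta}_{\mathrm{MLE}} = y/n, this is nonzero even for y = 0;
% as n \to \infty the likelihood dominates and \hat{\theta}_{\mathrm{Bayes}} \to y/n.
```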
train_2513
For example, in 'John bought my car → John sold my car' the inference is invalid due to an inherently incorrect rule, but the context is valid.
in 'my boss raised my salary → my boss constructed my salary' the context {'my boss', 'my salary'} for applying 'raise → construct' is invalid.
contrasting
train_2514
Our first measure provides a nonparametric similarity by comparing the similarity of the rankings for the intersection of the senses in both semantic signatures.
we additionally weight the similarity such that differences in the highest ranks are penalized more than differences in lower ranks.
contrasting
train_2515
(2009) and Hughes and Ramage (2007), which represent word meaning as the multinomials produced from random walks on the WordNet graph.
unlike our approach, neither of these disambiguates the two words being compared, which potentially conflates the meanings and lowers the similarity judgment.
contrasting
train_2516
", and person "A single human being; an individual" all align with someone n:1 (00007846-n).
two distinct definitions within the same Wiktionary entry should not map to the same WordNet sense.
contrasting
train_2517
Because we use a different approach, it would be possible to merge the two if the licenses allowed us to.
since the CC BY-SA and CC-BY-NC-SA licenses are mutually exclusive, the two works cannot be combined and re-released unless relevant parties can relicense the works.
contrasting
train_2518
It can be downloaded to a local cluster, but the transfer cost is prohibitive at roughly 10 cents per gigabyte, making the total over $8000 for the full dataset.
it is unnecessary to obtain a copy of the data since it can be accessed freely from Amazon's Elastic Compute Cloud (EC2) or Elastic MapReduce (EMR) services.
contrasting
train_2519
This is different from running a single Map-Reduce job over the entire dataset, since websites in different subsets of the data cannot be matched.
since the data is stored as it is crawled, it is likely that matching websites will be found in the same split of the data.
contrasting
train_2520
As in McDonald (2006) and Clarke and Lapata (2008), our sequence-based compression model makes a binary "keep-or-delete" decision for each word in the sentence.
however, we … (Figure 1: Diagram of tree-based compression.)
contrasting
train_2521
The current scorer Score_Basic is still fairly naive in that it focuses only on features of the sentence to be compressed.
extra-sentential knowledge can also be important for query-focused MDS.
contrasting
train_2522
(2006), which either uses minimal non-learning-based compression rules or is a pure extractive system.
our compression system sometimes generates less grammatical sentences, and those are mostly due to parsing errors.
contrasting
train_2523
MSA can thus find the best way to align the sequences with insertions or deletions in accordance with the scorer.
computing an optimal MSA is NP-complete (Wang and Jiang, 1994), thus we implement an approximate algorithm (Needleman and Wunsch, 1970) that iteratively aligns two sequences at a time and treats the resulting alignment as a new sequence.
contrasting
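The pairwise step that train_2523 iterates can be sketched as classic Needleman-Wunsch global alignment; a minimal Python version, assuming illustrative scoring values rather than the paper's scorer (it returns the score only and omits traceback for brevity):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of sequences a and b (Needleman & Wunsch, 1970)."""
    n, m = len(a), len(b)
    # DP table: score[i][j] = best score aligning a[:i] with b[:j].
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(
                score[i - 1][j - 1] + sub,  # match / substitution
                score[i - 1][j] + gap,      # gap in b (deletion)
                score[i][j - 1] + gap,      # gap in a (insertion)
            )
    return score[n][m]

# e.g. aligning two token sequences:
print(needleman_wunsch("the tree is tall".split(), "the tall tree".split()))
```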
train_2524
A typical NLG system contains three main components: (1) Document (Macro) Planning - deciding what content should be realized in the output and how it should be structured; (2) Sentence (Micro) Planning - generating a detailed sentence specification and selecting appropriate referring expressions; and (3) Surface Realization - generating the final text after applying morphological modifications based on syntactic rules (see e.g., Bateman and Zock (2003), Reiter and Dale (2000) and McKeown (1985)).
document planning is arguably one of the most crucial components of an NLG system and is responsible for making the texts express the desired communicative goal in a coherent structure.
contrasting
train_2525
Typically, knowledge-based NLG systems are implemented by rules and, as mentioned above, have a pipelined architecture for the document and sentence planning stages and surface realization (Hovy, 1993;Moore and Paris, 1993).
document planning is arguably the most important task (Sripada et al., 2001).
contrasting
train_2526
(2013) where, in a given corpus, a combination of domain-specific named entity tagging and clustering of sentences (based on semantic predicates) was used to generate templates.
while the system consolidated both sentence planning and surface realization with this approach (described in more detail in Section 3), the document plan was given via the input data and sequencing information was present in training documents.
contrasting
train_2527
As indicated in Figure 2, the BLEU-4 scores are low for all DocSys and DocBase generations (as compared to DocOrig) for each domain.
the METEOR scores, while low overall (ranging from .201-.437), are noticeably increased over …
contrasting
train_2528
The biography domain illustrates this scenario (Figure 5), where the results are similar to the text-understandability experiments.
for the weather domain, sentences from the DocBase system were preferred to our system's (Figure 6).
contrasting
train_2529
The bilingual lexicon can only express the translation equivalence between source- and target-side word pairs and has little ability to deal with word reordering and idiom translation.
phrase pair induction can make up for this deficiency to some extent.
contrasting
train_2530
It is obvious that translation performance is relatively high when using the News training data to evaluate a test set from the same News domain.
when the test domain is changed, the translation performance decreases to a large extent.
contrasting
train_2531
In transductive learning, an in-domain lexicon and both-side monolingual data have been explored.
this method does not take full advantage of both-side monolingual data because it uses source and target monolingual data individually.
contrasting
train_2532
in a text about finance, bank is likely to refer to a financial institution while in a text about geography, it is likely to refer to a river bank).
in the classic WSD task, ambiguous word types and a set of possible senses are known in advance.
contrasting
train_2533
Given monolingual text in a new domain, OOVs are easy to identify, and their translations can be acquired using dictionary extraction techniques (Rapp, 1995;Fung and Yee, 1998;Schafer and Yarowsky, 2002;Schafer, 2006;Haghighi et al., 2008;Mausam et al., 2010;Daumé III and Jagarlamudi, 2011), or active learning (Bloodgood and Callison-Burch, 2010).
previously seen (even frequent) words which require new translations are harder to spot.
contrasting
train_2534
If it were the case that only one sense existed for all word tokens of a particular type within a single domain, we would expect our word type features to be able to spot new senses without the help of the word token features.
in fact, even within a single domain, we find that often a word type is used with several senses, suggesting that word token features may also be useful.
contrasting
train_2535
The idea is that if a word is used in one of the known senses, its context must have been seen previously, and hence we hope that the PSD classifier outputs a spiky distribution.
if the word takes a new sense then hopefully it is used in an unseen context, resulting in the PSD classifier outputting a uniform distribution.
contrasting
train_2536
We observe that words that are less frequent in both the old and the new domains are more likely to have a new sense than more frequent words, which causes the CONSTANT base-line to perform reasonably well.
it is more difficult for our models to make good predictions for less frequent words.
contrasting
train_2537
Again, our models perform better under a macro-level evaluation than under a micro-level evaluation.
in contrast to the SENSESPOTTING results, token-level features perform quite well on their own for all domains.
contrasting
train_2538
The knowledge engineering approach has been used in early grammatical error correction systems (Murata and Nagao, 1993;Bond et al., 1995;Bond and Ikehara, 1996;Heine, 1998).
as noted by Han et al. (2006), rules usually have exceptions, and it is hard to utilize corpus statistics in handcrafted rules.
contrasting
train_2539
Suppose the corrected output of a system to be evaluated is exactly this perfectly corrected sentence: "The data is similar to the test set."
the official HOO scorer using GNU Wdiff will automatically extract only one system edit ("with" → "to the") for this system output.
contrasting
train_2540
The most direct comparison is SPIDER's F-score of 39.7% compared to his LSW03 algorithm's 35.6% (both are minimality resolvers).
our evaluation is more penalized since SPIDER loses precision for NER's false positives (Jack London as a location) while Leidner only evaluated on actual locations.
contrasting
train_2541
With regard to methodology, most of previous studies on argument extraction recast it as a Semantic Role Labeling (SRL) task and focus on intra-sentence information to identify the arguments and their roles.
argument extraction is quite different from SRL in the sense that, while the relationship between a predicate and its arguments in SRL can be mainly decided from the syntactic structure, the relationship between an event trigger and its arguments is more semantics-based, especially in Chinese, a paratactic (e.g., discourse-driven and pro-drop) language with widespread ellipsis and an open, flexible sentence structure.
contrasting
train_2542
We will use this gold information also in Section 6.1 to show that our system aligns well to the state of the art on the ACE 2004 benchmark.
in a realistic setting this information is unavailable or noisy.
contrasting
train_2543
Removing gold entity and mention information results in a significant F1 drop from 66.3% to 54.2%.
in a realistic setting we do not have gold entity info available, especially not in the case when we apply the system to any kind of text.
contrasting
train_2544
In the latter work we also tackled the problem of unsupervised phonemic prediction for unknown languages by using textual regularities of known languages.
we assumed that the target language was written in a known (Latin) alphabet, greatly reducing the difficulty of the prediction task.
contrasting
train_2545
This extra information may help with data sparsity, providing better estimates for rare and unseen n-grams.
there is still only modest overlap between the sentences for longer n-grams, particularly given that the corpus is sentencealigned and that 27% of the sentence pairs in this aligned data set are identical.
contrasting
train_2546
Because the data is aligned and therefore similar, we see the perplexity curves run parallel to each other as more data is added.
even though these sentences represent the same content, the language use is different between simple and normal and the normal data performs consistently worse.
contrasting
train_2547
"the victim"), (iii) the empty RE.
to the GREC data sets, our RE candidates are not represented as the original surface strings, but as non-linearized subtrees.
contrasting
train_2548
When REG and linearization are applied on shallowSyn−re with gold shallow trees, the BLEU score is lower (60.57) as compared to the system that applies syntax and linearization on deepSyn+re, deep trees with gold REs (BLEU score of 63.9).
the BLEU_r score, which generalizes over lexical RE mismatches, is higher for the REG→LIN components than for SYN→LIN.
contrasting
train_2549
The REG module benefits from the linearization in the case of deeplinSyn−re and prelinSyn−re, outperforming the component applied on the non-linearized deep syntax trees.
the REG module applied on prelinSyn−re, predicted shallow and linearized trees, clearly outperforms the module applied on deeplinSyn−re.
contrasting
train_2550
One of the most effective orthographic features is capitalization in English, which helps NER to generalize to new text of different genres.
capitalization is not very useful in some languages such as German, and nonexistent in other languages such as Arabic.
contrasting
train_2551
For example, Arabic allows the attachment of pronouns to nouns and verbs.
pronouns are rarely, if ever, attached to named entities.
contrasting
train_2552
We also tested NER on the TWEETS test set, where we observed substantial improvements in recall (35.9%).
precision dropped by 6.2% for the reasons we mentioned earlier.
contrasting
train_2553
In this paper, we focus on providing a solution for social media text normalization as a preprocessing step for NLP applications.
this is a challenging problem for several reasons.
contrasting
train_2554
Traditionally, an OOV word is defined as a word that does not exist in the vocabulary of a given system.
this definition is not adequate for the social media text which has a very dynamic nature.
contrasting
train_2555
Our approach has some similarities with (Han et al., 2012), since both approaches utilize a normalization lexicon acquired from unlabeled data using distributional and string similarities.
our approach is significantly different, since we acquire the lexicon using random walks on a contextual similarity graph, which has a number of advantages over the pairwise similarity approach used in (Han et al., 2012).
contrasting
train_2556
The random walks will traverse more frequent paths, which would lead to more probable normalization equivalences.
the proposed approach also provides high recall, which is hard to achieve together with high precision.
contrasting
train_2557
Moreover, the lexicon contains some entries with foreign-language words normalized to their English translations.
the lexicon has some bad normalizations, such as "unrecycled", which should be normalized to "non recycled"; but since the system is limited to one-word corrections, it did not get it.
contrasting
train_2558
To avoid unmodeled factors affecting the evaluation, we have carefully removed noise from our experiment datasets.
real Web pages are more complex; there are often instances of sentence fragments, such as captions and navigational link text.
contrasting
train_2559
These systems have the ability to handle questions with complex semantics on small domain-specific databases like GeoQuery (Tang and Mooney, 2001) or subsets of Freebase (Cai and Yates, 2013), but have yet to scale to the task of general, open-domain question answering.
our system answers questions with more limited semantics, but does so at a very large scale in an open-domain manner.
contrasting
train_2560
The REVERB database contains a large cross-section of general world knowledge, and thus is a good testbed for developing an open-domain QA system.
the extractions are noisy, unnormalized (e.g., the strings obama, barack-obama, and president-obama all appear as distinct entities), and ambiguous (e.g., the relation born-in contains facts about both dates and locations).
contrasting
train_2561
Many of these problems and aid activities were reported via Twitter.
most problem reports and corresponding aid messages were not successfully exchanged between victims and local governments or humanitarian organizations, overwhelmed by the vast amount of information.
contrasting
train_2562
An evident alternative to this approach is to use sentiment analysis (Mandel et al., 2012;Tsagkalidou et al., 2011) assuming that problem reports should include something 'bad' while aid messages describe something 'good'.
we will show that this does not work well in our experiments.
contrasting
train_2563
They refer to events that activate troubles.
(B) is exemplified by ⟨school, X is collapsed⟩, ⟨battery, X runs out⟩, which imply that some non-trouble objects such as resources, appliances and facilities are dysfunctional.
contrasting
train_2564
They represent the dysfunction of troubles and can mean the solution or the settlement of troubles.
examples of (D) include ⟨school, X re-build⟩ and ⟨baby formula, buy X⟩.
contrasting
train_2565
Word Sentiment Polarity (WSP): As we suggested before, full-fledged sentiment analysis to recognize the expressions, including clauses and phrases, that refer to something good or bad was not effective in our task.
the sentiment polarity assigned to single words turned out to be effective.
contrasting
train_2566
Most methods rely on local information (bag-of-words, short ngrams or elementary syntactic fragments) and do not attempt to account for more complex interactions.
these local lexical representations by themselves are often not sufficient to infer a sentiment or aspect for a fragment of text.
contrasting
train_2567
The clause let's not talk about the view could by itself be neutral or even positive given the right context (e.g., I've never seen such a fancy hotel room, my living room doesn't look that cool... and let's not talk about the view).
the contrast relation signaled by the connective but makes it clear that the second clause has a negative polarity.
contrasting
train_2568
Discourse cues typically do not directly indicate sentiment polarity (or aspect).
they can indicate how polarity (or aspect) changes as the text unfolds.
contrasting
train_2569
Note that we do not have a discourse relation SameSame since we do not expect to have strong linguistic evidence which states that an EDU contains the same sentiment information as the previous one.
we assume that the sentiment and topic flow is fairly smooth in general.
contrasting
train_2570
(2006), which jointly extracts opinion expressions, holders and their IS-FROM relations using an ILP approach.
our approach (1) also considers the IS-ABOUT relation which is arguably more complex due to the larger variety in the syntactic structure exhibited by opinion expressions and their targets, (2) handles implicit opinion relations (opinion expressions without any associated argument), and (3) uses a simpler ILP formulation.
contrasting
train_2571
Epistemological bias has not received much attention since it does not play a major role in overtly opinionated texts, the focus of much research on stance recognition.
our logistic regression model reveals that epistemological and other features can usefully augment the traditional sentiment and subjectivity features for addressing the difficult task of identifying the bias-inducing word in a biased sentence.
contrasting
train_2572
We present a city navigation and tourist information mobile dialogue app with integrated question-answering (QA) and geographic information system (GIS) modules that helps pedestrian users to navigate in and learn about urban environments.
to existing mobile apps, which treat these problems independently, our Android app addresses the problem of navigation and touristic question-answering in an integrated fashion using a shared dialogue context.
contrasting
train_2573
Another user observation was that they did not have to stop to listen to information presented by our system (as it was in speech) and could carry on walking.
with the baseline system, they had to stop to read information off the screen.
contrasting
train_2574
Procedural dialog systems can help users achieve a wide range of goals.
such systems are challenging to build, currently requiring manual engineering of substantial domain-specific task knowledge and dialog management strategies.
contrasting
train_2575
Also related are studies on linguistic style accommodation (Mukherjee and Liu, 2012d) and user pair interactions (Mukherjee and Liu, 2013) in online debates.
these works do not consider tolerance analysis in debate discussions, which is the focus of this work.
contrasting
train_2576
They show that choices in the methodology, such as data sets, evaluation metrics and type of cross-validation can influence the conclusions of an experiment, as we also find in our second use case.
they focus on the problem of evaluation and recommendations on how to achieve consistent reproducible results.
contrasting
train_2577
In principle, it is to be expected that numbers are not exactly the same while evaluating against a different data set (the mc-set versus the rg-set), taking a different set of synsets to evaluate on (changing PoS-tag restrictions) or changing configuration settings that influence the similarity score.
a variation of up to 0.44 points in Spearman ρ and 0.60 in Kendall τ leads to the question of how indicative these results really are.
contrasting
train_2578
This effect is not observed in wup and lch which correct for the depth of the hierarchy.
res, lin and jcn score better on the same set when verbs are considered, because they cannot detect any relatedness for the pair crane-implement when restricted to nouns.
contrasting
train_2579
The (fictional) Zigglebottom tagger is a tagger with spectacular results that looks like it will solve some major problems in your system.
the code is not available and a new implementation does not yield the same results.
contrasting
train_2580
Most of the research and evaluation considers query segmentation as a process analogous to identification of phrases within a query which when put within double-quotes (implying exact matching of the quoted phrase in the document) leads to better IR performance.
this is a very restricted view of the process and does not take into account the full potential of query segmentation.
contrasting
train_2581
2011) have created larger annotated datasets through crowdsourcing.
in their approach the crowd is provided with a few (four) possible segmentations of a query to choose from (known through personal communication with the authors).
contrasting
train_2582
Similarly, Bharati et al. (1995) define it as Noun Group and Verb Group based only on local surface information.
cognitive and annotation experiments for chunking of English (Abney, 1992) and other language text (Bali et al., 2009) have shown that native speakers agree on major clause and phrase boundaries, but may not do so on more fine-grained chunks.
contrasting
train_2583
Honey pots, or trap questions whose answers are known a priori, are often included in a HIT to identify turkers who are unable to solve the task appropriately, leading to incorrect annotations.
this trick cannot be employed in our case because there is no notion of an absolutely correct segmentation.
contrasting
train_2584
Various studies have been conducted on Community QA sites, including retrieving accumulated question-answer pairs to find related answers for new questions, finding experts in a specific domain, and summarizing single or multiple answers to provide a concise result (Jeon et al., 2005; Adamic et al., 2008; Liu et al., 2008; Song et al., 2008; Si et al., 2010a; Figueroa and Atkinson, 2011).
an important issue which has been neglected so far is the detection of deceptive answers.
contrasting
train_2585
Some of the previous work (Song et al., 2010; Ishikawa et al., 2010; Bian et al., 2009) views the "best answer", which is selected by the asker on Community QA sites, as a high-quality answer.
the deceptive answer may be selected as a high-quality answer by the spammer, or because general users are misled.
contrasting
train_2586
Based on this assumption, they can extract the expert in the community, as discussed in Section 3.2.3.
we don't care which user is more knowledgeable, but are more interested in whether two users are both malicious or both authentic.
contrasting
train_2587
The previous studies (Jeon et al., 2006;Song et al., 2010) find that the word features can't improve the performance on answer quality prediction.
from Table 1, we can see that the word features can provide some weak signals for deceptive answer prediction; for example, the words "recommend", "address", and "professional" express some kind of promotional intent.
contrasting
train_2588
For answer quality prediction, length is an effective feature; for example, long length provides a very strong signal for high-quality answers (Shah and Pomerantz, 2010; Song et al., 2010).
for deceptive answer prediction, we find that long answers are more likely to be deceptive.
contrasting
train_2589
Since this idea looks quite intuitive, many people would probably consider it a solution to why-QA.
to our surprise, we could not find any previous work on why-QA that took this approach.
contrasting
train_2590
", although the "tsunamis" are modified by different predicates; "cause" and "generate."
effect part "tsunamis weaken as they pass through forests" of A4 implies that "tsunamis" are suppressed.
contrasting
train_2591
This approach was later improved by Wang and Manning (2010) with a tree-edit CRF model that learns the latent alignment structure.
general tree matching methods based on tree-edit distance were first proposed by Punyakanok et al.
contrasting
train_2592
A recurrent neural network language model (Mikolov et al., 2010) aims to estimate the probability of observing a word given its preceding context.
one by-product of this model is the word embedding learned in its hidden-layer, which can be viewed as capturing the word meaning in some latent, conceptual space.
contrasting
train_2593
(2007) presented a generative probabilistic model based on a Quasi-synchronous Grammar formulation and was later improved by Wang and Manning (2010) with a tree-edit CRF model that learns the latent alignment structure.
heilman and … (table: System / MAP / MRR) … Wang et al.
contrasting
train_2594
Overall, by simply incorporating more information on word relations, we gain approximately 10 points in both MAP and MRR compared to surface-form matching (Line #4 vs. #2), consistently across all three models.
adding more information like named entity matching and answer type verification does not seem to help much (Line #5 vs. #4).
contrasting
train_2595
For example, in Figure 1, "kindly" and "courteous" are incorrectly regarded as the modifiers for "foods" if the WAM is performed in a wholly unsupervised framework.
by using some high-precision syntactic patterns, we can assert that "courteous" should be aligned to "services", and "delicious" should be aligned to "foods".
contrasting
train_2596
As mentioned in (Liu et al., 2013), using PSWAM can not only inherit the advantages of WAM (effectively avoiding noise from syntactic parsing errors when dealing with informal texts) but also improve the mining performance by using partial supervision.
is this kind of combination always useful for opinion target extraction?
contrasting
train_2597
Compared with State-of-the-art Methods.
it's not to say that this combination is not useful.
contrasting
train_2598
Previous pattern learning algorithms (Zhuang et al., 2006;Kessler and Nicolov, 2009;Jijkoun et al., 2010) often extract opinion patterns by frequency.
some high-frequency syntactic patterns can have very poor precision (Kessler and Nicolov, 2009).
contrasting
train_2599
One straightforward option for a Linear Programming formulation may seem to be using the same Integer Linear Programming formulation introduced in §2.3, only changing the variable definitions to real values ∈ [0, 1] rather than integers.
because the hard constraints in §2.3 are defined based on the assumption that all the variables are binary integers, those constraints are not as meaningful when considered for real numbers.
contrasting
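A minimal sketch of the failure mode train_2599 describes, with a toy objective and constraint (assumptions for illustration, not the paper's §2.3 formulation): relaxing binary variables to [0, 1] lets the LP return fractional solutions for which integrality-based hard constraints lose their meaning.

```python
# Relax an ILP to an LP by allowing x in [0, 1] instead of {0, 1}.
# Toy instance (an assumption for illustration): maximize x1 + x2
# subject to x1 + x2 <= 1.5. The ILP optimum selects one variable;
# the LP relaxation returns fractional values instead.
from scipy.optimize import linprog

res = linprog(c=[-1, -1],              # linprog minimizes, so negate
              A_ub=[[1, 1]], b_ub=[1.5],
              bounds=[(0, 1), (0, 1)])
print(res.x)  # e.g. [1.0, 0.5] or [0.5, 1.0]: fractional, not binary
```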