id: string (lengths 7-12)
sentence1: string (lengths 6-1.27k)
sentence2: string (lengths 6-926)
label: string (4 classes)
train_7000
This quantity can be interpreted as the number of times that a human reader would have to "jump" between words to recover the correct translation order.
no distinction is made between short and long-range reordering errors.
contrasting
train_7001
(2011) introduce yet another reordering-specific metric, called fuzzy reordering score (FRS) which, like the KRS, is independent from lexical choice and measures the similarity between a sentence's reference reordering and the reordering produced by an SMT system (or by a pre-ordering technique).
whereas Birch, Osborne, and Blunsom (2010) used Kendall's tau between the two sentence permutations, Talbot et al.
contrasting
train_7002
As also noted by Fox (2002), this kind of reordering is not strictly necessary to produce accurate and fluent translations, but its occurrence in parallel corpora affects the automatic reordering measures.
a qualitative analysis can profit from the extensive work done by linguists and grammaticians to abstract the fundamental properties of a language.
contrasting
train_7003
(2008) and Birch, Blunsom, and Osborne (2009), PSMT performs similarly or better than HSMT for the Arabic-to-English language pair.
HSMT was shown to better cope with the reordering of VSO sentences (Bisazza 2013).
contrasting
train_7004
French and Arabic [Main order: different; CDiff: 1.5; PDiff: 1] At the clause level, this pair differs in main word order (SVO versus VSO or SVO) like the English-Arabic pair, but also in the order of negation and verb.
phrase-level order is notably more similar, with only one discordant feature of minor importance (adjective and degree word).
contrasting
train_7005
More specifically, for a fixed CFG, we can solve the recognition problem in time cubic in the length of the input string, using standard dynamic programming techniques.
for a fixed SCFG we can still recognize the input pair [u, v] in polynomial time, but the degree of the polynomial depends on the specific structure of the synchronous rules of the grammar, and can be much larger than in the CFG case.
contrasting
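The record above contrasts cubic-time CFG recognition with the higher-degree polynomial for SCFGs. As a concrete reference point for the CFG side only, here is a minimal CKY-style recognizer for a grammar in Chomsky Normal Form; the dictionary-based grammar encoding (`lexical`, `binary`) is an assumption of this sketch, not something taken from the source.

```python
from itertools import product

def cky_recognize(words, lexical, binary, start="S"):
    """CKY recognition for a CNF grammar in O(n^3 * |G|) time.

    lexical: dict terminal -> set of nonterminals (rules A -> w)
    binary:  dict (B, C)   -> set of nonterminals (rules A -> B C)
    """
    n = len(words)
    # chart[i][j] holds the nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexical.get(w, ()))
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):  # the split point: the cubic factor
                for B, C in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary.get((B, C), set())
    return start in chart[0][n]
```

For an SCFG, the chart items instead range over tuples of substrings in both languages, which is where the higher, grammar-dependent polynomial degree comes from.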
train_7006
Chart parsing can be generalized to SCFGs.
in the latter case parse trees no longer generate single substrings: They rather generate tuples with several substrings.
contrasting
train_7007
There are recent attempts to address some of these issues by using alternative characterizations of word meaning that do not involve creating a partition of usages into senses (McCarthy and Navigli 2009;Erk, McCarthy, and Gaylord 2013), and by asking WSI systems to produce soft or graded clusterings (Jurgens and Klapaftis 2013) where tokens can belong to a mixture of the clusters.
these approaches do not overtly consider the location of a lemma on the continuum, but doing so should help in determining an appropriate representation.
contrasting
train_7008
AIC depends on the sample size (through p(Y|M)), so in order to be able to compare all models that model the same partitionability estimate, we compute AIC only on the subset of lemmas that enters in all analyses.
we compute the F test on all lemmas where the clusterability measure is valid, in order to use the largest possible set of lemmas to test the viability of a model.
contrasting
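For reference, the standard definition of AIC that the comparison above relies on (a textbook formula, not quoted from the source); because the likelihood term depends on the data, AIC values are only comparable across models fit to the same data, which is why the text restricts the computation to the shared subset of lemmas.

```latex
% k: number of free parameters of model M
% \hat{L}: maximized likelihood of the data Y under M
\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \hat{L} = p(Y \mid M, \hat{\theta})
```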
train_7009
For example, our previous work (Nakov and Ng 2009, 2012) experimented with various techniques for combining a small bitext for a resource-poor language (Indonesian or Spanish) with a much larger bitext for a related resource-rich language (Malay or Portuguese), pretending that Spanish is resource-poor; the target language of all bitexts was English.
that work did not attempt language adaptation, except for very simple transliteration for Portuguese-Spanish that ignored context entirely; because it does not substitute a word with a completely different word, transliteration did not help much for Malay-Indonesian, which use unified spelling.
contrasting
train_7010
The target sentence is shown in T. Because the decoders used in SMT and ASR typically work at the phrase-or the word-level, they cannot make use of sentence-level features.
our text rewriting decoder works at the sentence-level, that is, all hypotheses are complete sentences.
contrasting
train_7011
We believe this is because the lattice encodes many options but does not use a Malay language model, while 1-best uses a Malay language model but has to commit to a single best hypothesis.
CN:word uses both n-best outputs and an Indonesian language model.
contrasting
train_7012
For instance, for the tweet Yoona taking the ' ' (be healthy)^^, the segment "(be healthy" would not be valid, because it does not contain the parenthesis closer ")".
both "(be healthy)" and "be healthy" are acceptable.
contrasting
train_7013
Thus, if these are the only characters that are aligned during the computation of the translation score, both languages are equally likely to be correct.
the Hiragana characters that surround it only exist in Japanese, so the entire sequence is much more likely to be Japanese than Chinese.
contrasting
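A toy illustration of the script cue described in this record: the Hiragana Unicode block (U+3040 to U+309F) occurs in Japanese but not in Chinese text, so its mere presence is strong evidence for Japanese. The Unicode range is standard; the function and example strings are hypothetical.

```python
def hiragana_ratio(text):
    """Fraction of characters falling in the Hiragana block (U+3040-U+309F)."""
    if not text:
        return 0.0
    return sum("\u3040" <= ch <= "\u309f" for ch in text) / len(text)

print(hiragana_ratio("天気がいい"))  # > 0: strong evidence for Japanese
print(hiragana_ratio("天氣很好"))    # 0.0: no hiragana, could be Chinese
```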
train_7014
Once P L (x | l) is computed for all possible values of l and x, calculating the language score can be trivially computed.
computing the translation score φ_T requires the computation of the word alignments in φ_T over all possible segmentations, which requires O(|x|^6) operations using a naive approach [O(|x|^4) segmentation candidates, and O(|x|^2) operations for aligning each pair of segments].
contrasting
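To make the complexity bookkeeping in this record concrete, a small sketch of where the O(|x|^4) segmentation candidates come from, assuming (as the bracketed note suggests) that a segmentation is a pair of spans, each determined by two indices; the function name is illustrative.

```python
def segmentation_candidates(n):
    """Enumerate candidate span pairs (p, q) and (u, v) over a length-n text.

    Each span is an index pair, so there are O(n^2) choices per span and
    O(n^4) candidate pairs; aligning one pair costs another O(n^2), giving
    the naive O(n^6) total quoted above.
    """
    for p in range(n):
        for q in range(p + 1, n + 1):          # first span x[p:q]
            for u in range(n):
                for v in range(u + 1, n + 1):  # second span x[u:v]
                    yield (p, q), (u, v)

print(sum(1 for _ in segmentation_candidates(10)))  # 3025 = (10*11/2)^2
```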
train_7015
For instance, the span A_{2,3,4,5} can be reached using λ+v(A_{2,3,4,4}), λ+u(A_{2,3,3,5}), λ+q(A_{2,2,4,5}), or λ+p(A_{1,3,4,5}).
it is desirable to apply λ+u whenever possible, because it only requires one operation.
contrasting
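A toy encoding of the four span-extension operations named in this record, with the tuple layout and increment direction read off the example above; the actual cost model that makes λ+u the cheapest operation is in the source and is not reproduced here.

```python
# A span is encoded as (p, q, u, v); each lambda op advances one index.
def lam_p(s): p, q, u, v = s; return (p + 1, q, u, v)
def lam_q(s): p, q, u, v = s; return (p, q + 1, u, v)
def lam_u(s): p, q, u, v = s; return (p, q, u + 1, v)
def lam_v(s): p, q, u, v = s; return (p, q, u, v + 1)

target = (2, 3, 4, 5)
assert lam_v((2, 3, 4, 4)) == target
assert lam_u((2, 3, 3, 5)) == target
assert lam_q((2, 2, 4, 5)) == target
assert lam_p((1, 3, 4, 5)) == target
```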
train_7016
If we are looking for a particular language pair, this is fine, but when we search for translations in a large number of language pairs, this is impractical.
in most cases, many language pairs can be trivially ruled out.
contrasting
train_7017
The previous section describes a method for finding the most likely segments p, q, u, v and the language pair l, r and the alignments a, according to the IDA model score, for any document x.
extracting parallel sentences using this model requires addressing other issues, such as identifying the tweets that contain translations from those that do not.
contrasting
train_7018
Section 4.3 presented a supervised method to train a classifier that discriminates parallel and non-parallel data.
training requires annotated instances where parallel and non-parallel segments are identified.
contrasting
train_7019
We will rely on crowdsourcing to produce annotated data sets, which has been successfully used to generate parallel data (Ambati and Vogel 2010; Zaidan and Callison-Burch 2011; Post, Callison-Burch, and Osborne 2012; Ambati, Vogel, and Carbonell 2012).
these methods have been focused on using workers to translate documents.
contrasting
train_7020
For instance, the ratio between parallel and non-parallel tweets for Arabic-English in Twitter is 2:1 in the annotated data sets, which is definitely not the case in a uniformly extracted data set.
performing the annotation in a uniformly extracted data set is problematic for two reasons.
contrasting
train_7021
In some cases, these improvements can also be the result of overfitting.
a strong indication that this is not the case is the fact that similar results can be obtained for the Twitter test set, where we can observe a BLEU improvement from 9.55, using the NIST corpus, to 23.57, using the Weibo corpus.
contrasting
train_7022
As one could expect, shifted addition is on average closer to actual PMI values than plain addition.
weighted addition provides better approximations to the observed values.
contrasting
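One common reading of the three estimators compared in this record, sketched on single PMI values; the shift constant and the mixing weight are illustrative placeholders, and the source's exact estimators may differ.

```python
def plain_addition(pmi_u, pmi_v):
    """Approximate the composed PMI as a plain sum (tends to overshoot)."""
    return pmi_u + pmi_v

def shifted_addition(pmi_u, pmi_v, shift=1.0):
    """Plain sum minus a constant, correcting the systematic overshoot."""
    return pmi_u + pmi_v - shift

def weighted_addition(pmi_u, pmi_v, alpha=0.5):
    """Convex combination of the two PMI values."""
    return alpha * pmi_u + (1 - alpha) * pmi_v
```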
train_7023
The constructive proofs presented in Sections 2.3 and 2.4 define two kinds of oracles.
they are not directly applicable when the transition combination strategy is utilized.
contrasting
train_7024
For any graph, we can call this algorithm and get a corresponding tree.
the tree is informative only when the given graph is dense enough.
contrasting
train_7025
The objective of training a symbol-refined bigram tagger is to solve the LA-involved emission and transition parameters by maximizing the likelihood of the training data.
with a non-symbol-refined HMM tagger, where the POS tags are observed, the latent annotations are unseen variables.
contrasting
train_7026
A natural strategy for extending current experiments is to include both clustering results together.
we find no further improvement.
contrasting
train_7027
Note that the overall tagging performance of the Berkeley parser is significantly worse than the sequence models.
better POS tagging does not lead to better parsing.
contrasting
train_7028
The LCFRS shown here does not satisfy our normal form requiring each rule to have either two nonterminals on the right-hand side with no terminals in the composition function, or zero nonterminals with a composition function returning fixed strings of terminals.
it can be converted to such a form through a process analogous to converting a CFG to Chomsky Normal Form.
contrasting
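The record appeals to a conversion "analogous to converting a CFG to Chomsky Normal Form". As a reminder of what the analogy refers to, here is the core CFG binarization step; the LCFRS version must additionally split the composition functions, which this sketch omits.

```python
def binarize(rules):
    """Binarize CFG rules A -> B C D ... by introducing fresh nonterminals."""
    out, fresh = [], 0
    for lhs, rhs in rules:
        while len(rhs) > 2:
            fresh += 1
            new = f"X{fresh}"
            out.append((lhs, (rhs[0], new)))   # A -> B X1
            lhs, rhs = new, rhs[1:]            # continue with X1 -> C D ...
        out.append((lhs, tuple(rhs)))
    return out

print(binarize([("A", ("B", "C", "D"))]))
# [('A', ('B', 'X1')), ('X1', ('C', 'D'))]
```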
train_7029
As long as we move the first index from row to column, rather than from column to row, the intermediate results will require addresses no longer than the length of i.
if ϕ(B) = d, then every configuration in which B appears is balanced: If ϕ(B) = d and B appears in more than one configuration, that is, |config(B)| > 1, it is impossible to copy entries for B between the cells using a matrix of size (2n)^d.
contrasting
train_7030
In order to optimize the complexity of our algorithm, we want to minimize d, which is defined as a maximum over all rules A → B C. For a fixed binarized grammar, d is always less than p, the tabular parsing complexity, and, hence, the optimal d* over binarizations of an LCFRS is always less than the optimal p* for tabular parsing.
whether any savings can be achieved with our algorithm depends on whether ωd* < p*, or ωd* + 1 < p* in the case of balanced grammars.
contrasting
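A worked instance of the inequality in this record, with purely illustrative numbers (the source does not supply them): take the matrix multiplication exponent ω < 2.373 and suppose a grammar has optimal values d* = 4 and p* = 10.

```latex
% Illustrative numbers only: \omega < 2.373, d^{*} = 4, p^{*} = 10.
\omega d^{*} \approx 2.373 \times 4 \approx 9.49 < 10 = p^{*}
\qquad \text{(savings)}
\\
\omega d^{*} + 1 \approx 10.49 \not< 10
\qquad \text{(no savings if the grammar is balanced)}
```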
train_7031
Subsequent readability research by Crossley, Greenfield, and McNamara (2008) looked only at content overlap and showed it to be a significant feature.
similar work by Pitler and Nenkova (2008) did not lead to the same conclusion.
contrasting
train_7032
(2010) found that enlarging the corpus, which exclusively consisted of texts for primary school children, with more diverse text material allowed for an overall better performance.
the added value of the discourse relations to the system was still not significant.
contrasting
train_7033
We can conclude that the introduction of more complex linguistic features has indeed proven useful.
the discussion on which features are the best predictors remains open.
contrasting
train_7034
For the PoS-related features, we observe a clear difference between the English and Dutch data sets in that 78% of the English features versus only 48% of the Dutch features correlate (i.e., 21 versus 13 out of 27 to be exact).
for both languages at least one feature representing the five main part-of-speech classes (nouns, adjectives, verbs, adverbs, and prepositions) does correlate.
contrasting
train_7035
Having a general characterbased approach that works well across languages could be useful, for example, in a streamlined intelligence application that is required to work efficiently on a wide range of languages for which NLP tools or language experts might not be readily available, or where the user does not desire a complex customization for each language.
kernels based on a different kind of information (for example, syntactic and semantic information) can be combined with string kernels via MKL to improve accuracy in some specific situations.
contrasting
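The simplest form of the kernel combination mentioned in this record is a fixed weighted sum of precomputed Gram matrices (full MKL learns the weights; here they are assumed given). The function and weights below are illustrative, not an API from the source.

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Weighted sum of precomputed kernel (Gram) matrices."""
    K = np.zeros_like(kernels[0], dtype=float)
    for K_i, w in zip(kernels, weights):
        K += w * K_i
    return K

# e.g., a character string kernel plus a syntactic tree kernel:
# K = combine_kernels([K_string, K_syntactic], [0.7, 0.3])
```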
train_7036
The empirical results shown in Table 5 indicate that the presence bits kernel and the kernel based on LRD obtain better results on the raw text documents.
the results of the intersection kernel are roughly the same, or only slightly better than, the results presented in Table 4.
contrasting
train_7037
(2012), whereas KRR based on the same kernel reaches an accuracy of only 82.3%, which is 2.3 percentage points lower than the state-of-the-art (84.6%).
KRR overturns the results when the two kernels are combined through MKL.
contrasting
train_7038
Diving into details, we can see that the results obtained by KRR are higher than those obtained by KDA.
both methods perform very well compared with the state-of-the-art.
contrasting
train_7039
The translation function deals with this by postponing this decision with the help of λ-bound formulas representing roles.
when we start translating a fresh AMR, we start with the root node.
contrasting
train_7040
Speakers may thus make use of specific words or stylistic elements to represent themselves in a certain way.
because of this agency, social variables cease to have an essential connection with language use.
contrasting
train_7041
Sociolinguists must then also select an appropriate methodology.
typical methods used within sociolinguistics would require sampling the data down.
contrasting
train_7042
For example, based on data from Twitter (a popular microblogging site) dialectal variation has been mapped using a fraction of the time, costs, and effort that was needed in traditional studies (Doyle 2014).
data from CMC are not always easy to collect.
contrasting
train_7043
Furthermore, although historically the field of sociolinguistics started with a major focus on phonological variation (e.g., Labov 1966), the use of social media data has led to a higher focus on lexical variation in computational sociolinguistics.
there are concerns that a focus on lexical variation without regard to other aspects may threaten the validity of conclusions.
contrasting
train_7044
(2014) recruited participants using e-mail, social media, and blogs, which resulted in a sample that was likely to be biased towards linguistically interested people.
they did not expect that the possible bias in the data influenced the findings much.
contrasting
train_7045
Sociolinguistic studies have found that adolescents use the most nonstandard forms, because at a young age the group pressure to not conform to established societal conventions is the largest (Eckert 1997;Holmes 2013).
adults are found to use the most standard language, because for them social advancement matters and they use standard language to be taken seriously (Eckert 1997;Bell 2013).
contrasting
train_7046
We discussed variables such as gender, age, and geographical location, thereby mostly focusing on the influence of social structures on language use.
as we also pointed out, speaker agency enables violations of conventional language patterns.
contrasting
train_7047
Furthermore, most studies within computational linguistics generally assume that texts are written in one language.
these assumptions may not hold, especially in social media.
contrasting
train_7048
To make use of the rich repertoire of theory and practice from sociolinguistics and to contribute to it, we have to appreciate the methodologies that underlie sociolinguistic research, for example, the rules of engagement for joining in the ongoing scientific discourse.
as we have highlighted in the methodology discussion (Section 2), the differences in values between the communities can be perceived as a divide.
contrasting
train_7049
Sociolinguistic studies typically control for multiple social variables (e.g., gender, age, social class, ethnicity).
many studies in computational sociolinguistics focus on individual variables (e.g., only gender, or only age), which can be explained by the focus on social media data.
contrasting
train_7050
A Formal Distributional Semantics thus holds the promise of developing a more comprehensive model of meaning.
given the fundamentally different natures of FS and DS, building an integrative framework poses theoretical and engineering challenges.
contrasting
train_7051
Some approaches to composition can actually be seen as sitting between the F-first and D-first approaches: In Coecke, Sadrzadeh, and Clark (2011) and Grefenstette and Sadrzadeh (2011), a CCG grammar is converted into a tensor-based logic relying on the direct composition of distributional representations.
with their F-first counterparts, D-first FDSs regard distributions as the primary building blocks of the sentence, which must undergo composition rules to get at the meaning of longer constituents (see above).
contrasting
train_7052
FDS promises to give us a much better coverage of natural language than either Formal or Distributional Semantics.
much remains to be done; here we address some prominent limitations of current approaches and propose directions for future research.
contrasting
train_7053
This would capture the intuition we spelled out in the introduction that a wolf is a better non-dog than a screwdriver.
the proposal of Hermann and colleagues is, again, purely theoretical, and we do not see how domain and value features with the desired properties could be induced from corpora on a large scale.
contrasting
train_7054
As usually assumed, the alternative set for a sentence is the set of semantic values resulting from replacing the negated element with arbitrary values of the right semantic type.
it is clear that not all alternatives are created equal.
contrasting
train_7055
Supervision alone has virtually no effect on performance, and composition alone has a strong negative effect.
by combining composition and supervision, we obtain an increase in correlation of about 1% for both IT and THERE.
contrasting
train_7056
We recognize that the discussion in this conclusion is very speculative in nature, and much empirical and theoretical work remains to be done.
we hope to have demonstrated that accounting for negation, far from being one of the weak points of this formalism, is one of the most exciting directions in the development of a fully linguistically motivated theory of distributional semantics.
contrasting
train_7057
In that task, methods were tested on their ability to match a term (in this case a verb) with its definition (usually of the form verb-object), such as embark: enter boat or vessel.
the definitions of Kartsaklis, Sadrzadeh, and Pulman were mined from a set of dictionaries, whereas the RELPRON relative clauses are not limited to dictionary definitions.
contrasting
train_7058
Because there is an intuitive definition-like "flavor" to some relative clauses, we considered using dictionary definitions as a source of relative clauses for the data set.
in practice we found that short, natural definitions in relative clause format are rare in dictionaries.
contrasting
train_7059
The relative clauses in RELPRON are restrictive relative clauses, because they narrow down the meaning of the head noun.
a non-restrictive relative clause provides incidental situational information; for example, a device, which was in the room yesterday.
contrasting
train_7060
Despite this fact, there is a large literature investigating how such operators can be used for phrasal composition, starting with the work of Mitchell and Lapata (2008, 2010) (M&L subsequently).
the additive model from M&L has the general form p = Au + Bv, where u and v are word (column) vectors in R^n (e.g., the vectors for fast and car), p ∈ R^n is the vector for the phrase resulting from the composition of u and v (fast car), and A ∈ R^{n×n} and B ∈ R^{n×n} are matrices that determine the contribution of u and v to p. M&L make the simplifying assumption that only the ith components of u and v contribute to the ith component of p, which yields the form p = αu + βv; the parameters α, β ∈ R allow the contributions of u and v to be weighted differently, providing a minimal level of syntax-awareness to the model.
contrasting
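A minimal sketch of the weighted additive form reconstructed above, p = αu + βv; the example vectors and weights are made up, and in M&L the weights are tuned on development data.

```python
import numpy as np

def weighted_additive(u, v, alpha=0.6, beta=0.4):
    """M&L weighted additive composition: p = alpha*u + beta*v."""
    return alpha * np.asarray(u) + beta * np.asarray(v)

fast = np.array([0.2, 1.1, 0.0])
car = np.array([0.9, 0.1, 0.7])
print(weighted_additive(fast, car))  # vector for the phrase "fast car"
```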
train_7061
We find that the scores are relatively well-balanced across head nouns, but that more concrete terms and their properties may be easier to model.
based on qualitative observations, a more important factor seems to be term polysemy.
contrasting
train_7062
This may be a feature of the topic domain, in that the activities undertaken by different kinds of sports players-golfer, batter, pitcher, quarterback, and so forth-are distributionally similar.
all methods exhibit only average ability to identify the correct head noun for organization terms, but relatively high ability to select the correct organization properties when the head noun is known.
contrasting
train_7063
person properties, such as person that defends rationalism (philosopher).
when restricted to ranking organization properties, as in Table 10, SPLF achieves a perfect MAP of 1.0 for religion.
contrasting
train_7064
Dinu and Lapata (2010) advocate a method that resembles LVW, in that it uses a distribution over latent dimensions in order to measure semantic shifts in context.
whereas their approach computes the contextualized meaning directly within the latent space, the LVW approach we adopt in this article exploits the latent space to determine the features that are important for a particular context, and adapt the original (out-of-context) dependency-based feature vector of the target word accordingly.
contrasting
train_7065
McNally and Boleda (2016) offer empirical and conceptual arguments in favor of the TCL dual approach to meaning and, like us, see DS as an ally in specifying the internal content aspect of composition.
we offer a much more detailed and specific investigation of the interactions between TCL and particular methods of DS composition.
contrasting
train_7066
Following Baroni and Lenci (2010), we use typed dependency relations as the bases for our distributional features, and following Padó and Lapata (2007), we include higher-order dependency relations in this space.
in contrast to previous proposals, the higher-order dependency relations provides structure to the space that is crucial to our definition of composition.
contrasting
train_7067
Because we permit paths that traverse both forwards and backwards along the same dependency (for example, in the co-occurrence ⟨white/JJ, AMOD•AMOD, dry/JJ⟩), it is logical to consider ⟨white/JJ, AMOD•DOBJ•DOBJ•AMOD, dry/JJ⟩ a valid co-occurrence.
in line with our decision to include ⟨white/JJ, (empty path), white/JJ⟩ rather than ⟨white/JJ, AMOD•AMOD, white/JJ⟩, all co-occurrence types are canonicalized through a dependency cancellation process in which adjacent, complementary dependencies are cancelled out.
contrasting
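A sketch of the dependency cancellation process described in this record, under an assumed notation in which an inverse edge is marked with a leading "!" (the source uses its own notation for inverse dependencies): adjacent complementary labels cancel, in the manner of a stack.

```python
def canonicalize(path):
    """Cancel adjacent, complementary dependencies in a co-occurrence type."""
    out = []
    for dep in path:
        if out and (out[-1] == "!" + dep or dep == "!" + out[-1]):
            out.pop()          # complementary pair: cancel it
        else:
            out.append(dep)
    return out

# !DOBJ/DOBJ cancels, then AMOD/!AMOD cancels, leaving the empty path:
print(canonicalize(["AMOD", "!DOBJ", "DOBJ", "!AMOD"]))  # []
```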
train_7068
For example, many of the dimensions that make sense for verbs, such as those involving a co-occurrence type that begins with DOBJ or NSUBJ, do not make sense for a noun.
as we now explain, the co-occurrence type structure present in an APT allows us to address this, making way for our definition of distributional composition.
contrasting
train_7069
In particular, Baroni and Lenci showed that typed co-occurrences based on grammatical relations were better than untyped cooccurrences for distinguishing certain semantic relations.
as shown by Weeds, Weir, and Reffin (2014), it does not make sense to compose typed features based on firstorder dependency relations through multiplication and addition, because the vector spaces for different parts of speech are largely non-overlapping.
contrasting
train_7070
For example, in a lorry carries apples, there is a path of length 2 between the nouns lorry and apples via the node carry.
they also used a word-based basis mapping, which essentially reduces all of the salient grammatical paths to untyped co-occurrences.
contrasting
train_7071
Hence the phrases happiest blonde person and blonde happiest person receive the same dependency representation and therefore also the same semantic representation.
we believe that our approach is flexible enough to be able to accommodate a more sensitive grammar formalism that does allow for distinctions in modifier scope to be made if an application demands it.
contrasting
train_7072
It does not seem particularly useful at this point to speculate about phenomena that either a distributional approach or a logic-based approach would not be able to handle in principle, as both frameworks are continually evolving.
logical and distributional approaches clearly differ in the strengths that they currently possess (Coecke, Sadrzadeh, and Clark 2011;Garrette, Erk, and Mooney 2011;Baroni, Bernardi, and Zamparelli 2014).
contrasting
train_7073
It is a software package that contains implementations of a variety of MLN inference and learning algorithms.
developing a scalable, general-purpose, accurate inference method for complex MLNs is an open problem.
contrasting
train_7074
The knowledge bases used are WordNet and PPDB.
with our work, PPDB paraphrases are not translated to logical rules (Section 5.3).
contrasting
train_7075
We first notice that the phrasal subset is generally harder than the lexical subset: none of the feature sets on their own provide dramatic improvements over the baseline, or come particularly close to the ceiling score.
using all features together does better than any of the feature groups by themselves, indicating again that the feature groups are highly complementary.
contrasting
train_7076
For this reason, and because crossing dependencies have traditionally been rare in corpora of languages like English, Chinese, or Japanese, many implementations of dependency parsers assume projectivity (Nivre 2006).
crossing dependencies are needed to represent some linguistic phenomena like topicalization, scrambling, wh-movement, or extraposition, so it is necessary for natural language parsers to support non-projectivity, especially when working with languages with flexible word order.
contrasting
train_7077
The tasks involved in preprocessing content (normalization, POS tagging, and parsing) are clearly in the realm of natural language processing, and so are tasks such as named entity recognition, opinion mining, and event extraction.
the book does not do a good job of distinguishing what new techniques are required due to distinguishing properties of the social media domain.
contrasting
train_7078
⋆ The car dealer should sell trucks, provide sports cars, be located in France.
many syntactic and linear ordering choices are regulated by soft constraints-that is, yield text of variable acceptability.
contrasting
train_7079
Moreover, taking individual decisions at different sub-tasks in a sequential manner might lead to suboptimal solutions (Marciniak and Strube 2005).
symbolic joint approaches to microplanning lack robustness and efficiency.
contrasting
train_7080
In the course of distributional clustering, concrete concepts (e.g., water, coffee, beer, liquid) tend to be clustered together when they have similar meanings.
abstract concepts (e.g., marriage, democracy, cooperation) tend to be clustered together when they are metaphorically associated with the same source domain(s) (e.g., both marriage and democracy can be viewed as mechanisms or games).
contrasting
train_7081
2013;Tsvetkov, Mukomel, and Gershman 2013), as well as our own approach, share this intuition.
our methods are different in their aims.
contrasting
train_7082
Unfortunately, solving this minimization problem is NP hard (Wagner and Wagner 1993;Von Luxburg 2007).
an approximate solution can be found by relaxing the constraints on the elements of H in constraint 4.
contrasting
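The relaxation mentioned in this record is the standard spectral one (see Von Luxburg 2007): drop the discreteness constraint on the indicator matrix H and take eigenvectors of the graph Laplacian. A minimal sketch, with the k-means step on the rows of H omitted:

```python
import numpy as np

def spectral_embedding(W, k):
    """Relaxed solution of the partitioning objective for affinity matrix W.

    W must be symmetric and non-negative. Returns the bottom-k eigenvectors
    of the unnormalized Laplacian L = D - W as a relaxed indicator matrix.
    """
    D = np.diag(W.sum(axis=1))      # degree matrix
    L = D - W                       # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)     # eigh: eigenvalues in ascending order
    return vecs[:, :k]              # relaxed indicator matrix H

# rows of H are then clustered (e.g., with k-means) to recover a partition
```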
train_7083
The TT constraints are designed to reinforce this principle.
introducing the ST type of constraints allows us to investigate to what extent explicitly reinforcing the source domain features in clustering allows us to harvest more target domains associated with the source.
contrasting
train_7084
The TT constraints are designed to reinforce this principle.
introducing the TS type of constraint allows us to investigate to what extent explicitly reinforcing the source domain features in clustering allows us to harvest more target domains associated with the source.
contrasting
train_7085
Incorporating new seed expressions is thus likely to increase the recall of the system without a considerable loss in precision.
creating seed sets for new languages may not always be practical.
contrasting
train_7086
Another important reason AGG fails is that it by definition organizes all concepts into a tree and optimizes its solution locally, taking into account a small number of clusters at a time.
being able to discover connections between more distant domains and optimizing globally over all concepts is crucial for metaphor identification.
contrasting
train_7087
The performance of AGG in the identification of metaphorical expressions is higher than in the identification of metaphorical associations, because it outputs only a few expressions for the incorrect associations.
WN tagged a large number of literal expressions due to the incorrect prior identification of the underlying associations.
contrasting
train_7088
The differences in performance across languages are mainly explained by the differences in the quality of the data and pre-processing tools available for them.
both our quantitative results and the analysis of the system output confirm that all systems successfully discover metaphorical patterns from distributional information.
contrasting
train_7089
The use of annotated metaphorical mappings for supervision at the clustering stage does not significantly alter the performance of the system, because their patterns are already to a certain extent encoded in the data and can be learned.
metaphorical expressions are a good starting point in learning metaphorical generalizations in conjunction with clustering techniques.
contrasting
train_7090
Evaluation was performed in terms of measuring the acceptance of the "main argument" using the automatically recognized entailments, yielding an F1 score of about 0.75.
to our work, which deals with micro-level argumentation, Dung's model is an abstract framework intended to model dialogical argumentation.
contrasting
train_7091
We observed that they contain many artificial controversies or non-sense topics (for instance, createdebate.com) or their content is professionally curated (idebate.org, for example).
we admit that debate portals might be a valuable resource in the argumentation mining research.
contrasting
train_7092
The role of backing is to give additional support to the warrant, but there is no warrant in our model anymore.
what we observed during the analysis was the presence of some additional evidence.
contrasting
train_7093
We also observed documents in our data that were purely sarcastic (the pathos dimension); therefore logical analysis of the argument (the logos dimension) would make no sense.
given the structure of such documents, some claims or premises might also be identified.
contrasting
train_7094
2010) ("In some cases, inclusion can work fantastically well.", "For the majority of the children in the school, mainstream would not have been a suitable placement.").
most claims that are used, for instance, in the prayer in schools arguments are very direct, without trying to diminish their commitment to the conveyed belief (for example, NO PRAYER IN SCHOOLS!...)
contrasting
train_7095
All the topics except private vs. public schools exhibit similar amounts of verifiable non-experiential premises (9% to 22%), usually referring to expert studies or facts.
this type of premise usually has the lowest frequency.
contrasting
train_7096
In principle, one document can contain multiple independent arguments.
only 4% of the documents in our data set contain arguments for both sides of the issue.
contrasting
train_7097
A single token does not convey enough information that could be encoded as features for a machine learner.
as discussed in Section 4.4.5, the annotations were performed on data pre-segmented into sentences, and annotating tokens was necessary only when the sentence segmentation was wrong or one sentence contained multiple argument components.
contrasting
train_7098
Second, as compared to the negative experiences with annotating using Walton's schemes (see Sections 4.4.1 and 3.1), our modified Toulmin model offers a trade-off between its expressiveness and annotation reliability.
we found that the capabilities of the model to capture argumentation depend on the register and topic, the length of the document, and inherently on the literary devices and structures used for expressing argumentation as these properties influence the agreement among annotators.
contrasting
train_7099
Ruppenhofer and Rehbein (2012) argue that a frame-based representation of evaluative language is suitable for capturing multi-word evaluative expressions and idioms such as give away the store and sentiment composition.
apart from using semantic frames for identifying the topics (or targets) of sentiment (Kim and Hovy 2006) and deriving an intensity-based sentiment lexicon (Raksha et al.
contrasting