Column     Type           Range / Classes
id         stringlengths  7–12
sentence1  stringlengths  6–1.27k
sentence2  stringlengths  6–926
label      stringclasses  4 values
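For working with the records below programmatically, the following is a minimal sketch of one way to load and inspect a split with this schema. It assumes the Hugging Face "datasets" library; the dataset identifier is a placeholder, not a confirmed name.

    # A minimal sketch, not a confirmed loading recipe. The identifier
    # "your-namespace/this-dataset" is a placeholder for wherever this
    # split is actually hosted (or a local path).
    from collections import Counter

    from datasets import load_dataset

    ds = load_dataset("your-namespace/this-dataset", split="train")

    # Each record follows the schema above: id, sentence1, sentence2, label.
    example = ds[0]
    print(example["id"], example["label"])
    print(example["sentence1"])
    print(example["sentence2"])

    # `label` is a stringclasses column with 4 values; count how often
    # each class occurs in the split.
    print(Counter(ds["label"]))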
train_100100
Algorithm 1 shows the expansion to structured output learning with kernels.
when the average cost is used, the utterance position does not change results much.
neutral
train_100101
This yielded 612 pairs, each of which was judged by a linguist 7 as being a pair of false friends, partial cognates, or true cognates.
Nakov & Pacovski [31] tried various combinations of these statistics; in their experiments, the best-performing formula was the following one: now, note that we have the following inequalities: having a high number of co-occurrences #(w_bg, w_ru) should increase the probability that the words w_bg and w_ru are cognates.
neutral
train_100102
Let us consider the cases where R = {data base} and we have the following output lists: O_1 = {data base, data bases}, O_2 = {data bases} and O_3 = {data base, table of content}.
in biology, it was reported that extractors often propose incomplete terms, such as core rna, which are nevertheless kept in a modified form (core rna polymerase) by terminologists [2].
neutral
train_100103
The pattern selection process has been carried out for each system configuration, and evaluated on the test data.
insertion, deletion and substitution) which are required to transform T into H. Each edit operation on two text fragments A and B (denoted as A → B) has an associated cost (denoted as γ(A → B)).
neutral
train_100104
For example, in Figure 1, "tree" is encoded by "001", "node" is encoded by "011" and so on.
by using PartialGen, b is grown by appending every word in s into every title in b to make a list of titles of length j.
neutral
train_100105
Section 5 presents our experimental results with discussion about the results.
the output of this step is a tree of titles which is called a table-of-contents.
neutral
train_100106
Lexical forms are even more "suspicious" if present in non-parsable along with forms "cleared" by their presence in parsable ones [9].
it is essential for the formalisms to be compared to various kind of languages and practical tools in order to adapt and extend them.
neutral
train_100107
In practice, this means that we should expect the system to rate an 'average' grade as something else in 2 out of 241 cases (0.83%), a relatively minor error rate that we expect to accommodate in the subsequent (and also classification-driven) stages of the generation process.
within-sentence structuring is performed as follows.
neutral
train_100108
This may in principle seem counter-intuitive, as the textual realisation of one message could hinge on whether others are realised or not, but such dependencies were not observed in our data. We are interested in two particular aspects of Document Structuring (which to some extent also cover aspects of Microplanning in the standard pipeline NLG architecture in [1]): the task of organising the content messages computed in the previous Content Determination stage into sentences, and then organising these sentences in a global rhetorical structure.
meanings could span over entire sentences or even paragraphs.
neutral
train_100109
Table 3 reports the KSMR results for all possible combinations of the probability distributions used for f_3 (Uniform (U) and Geometric (G)) and for f_5 (Uniform (U), Geometric (G), and Poisson (P)).
the expanded hypotheses are stored into a stack data structure which allows the efficient exploration of the search space.
neutral
train_100110
The concept of partial phrase-alignment is similar to the concept of complete phrase alignment described in section 4.
7, a very straightforward technique can be proposed for finding the best phrase-alignment of a sentence pair (f, e).
neutral
train_100111
In this paper, we also focus on the IMT approach to CAT.
let us suppose that we are covering the source phrase f̃ ≡ "lista de recursos" given by the source positions u = {4, 5, 6}.
neutral
train_100112
It is also important to look at less-spoken but still prominent languages (Figures 5 and 6).
if we look at some top-priority issues of today (such as health, economy, homeland security, stem-cell research, and science teaching) and pressing research questions, such as how to enhance child development and learning and even how to make sense of the huge amount of information with which we deal daily, we can say that future topics will be so complex as to require insights from multiple disciplines.
neutral
train_100113
Node OBJ4 represents the last embedded object clause, which has the trace as an immediate constituent, the infinitive clause aller in our example.
according to the table, there are two kinds of interactions: • Linear interactions: a linear interaction occurs between exactly one positive feature f → v_1 and one negative feature f ← v_2, which combine into a saturated feature f; in this way, both features become saturated.
neutral
train_100114
(4) as follows: From the last equation, we can see that, if we multiply the values of all attributes by a constant a, a ≠ 0, the classification decision will remain the same (provided that we use no smoothing).
the figure further shows that using 1+q has little effect, i.e., useless attributes are not penalized enough.
neutral
train_100115
In a random partition with k parts (clusters), for each word in a pair the probability for the other word of being in the same cluster is 1/k.
the resulting word vectors will be similar for words that appear in similar contexts.
neutral
train_100116
Each classifier was trained per relation as in the previous experiments, but this time as negative examples we considered only those belonging to the corresponding contingency set.
it is interesting to note that most of these contingency sets involve Part-Whole.
neutral
train_100117
Global precision figures P_avg@1, P_avg@N and MAP (mean average precision) for Experiment Sets 1, 2 and 3 (automatic evaluation) are presented in Tables 3, 4 and 5.
since our approach relies only on minimal linguistic processing, the results presented can be considered a baseline for other methods that try to perform the same task, using additional linguistic information.
neutral
train_100118
From the OpenOffice thesaurus we collected (verb → list of synonyms) mappings for 2,783 verbs, each having 3.83 synonyms on average.
results achieved by combining the cosine distance with the Mutual Information weighting function suggest that low-frequency features carry most of the information regarding verb similarity.
neutral
train_100119
Results achieved by combining the cosine distance with the Mutual Information weighting function suggest that low-frequency features carry most of the information regarding verb similarity.
automatic methods usually involve a large set of parameters, whose impact on final results is difficult to assess, and thus to optimize.
neutral
train_100120
Size-bounded partitioning, on the other hand, can be used to obtain pre-clusters of manageable sizes, while at the same time committing far fewer understemming errors.
if we consider longer prefixes, more morphologically related word-forms will end up being assigned to distinct pre-clusters.
neutral
train_100121
We omitted word combinations (get happy), collocations and idioms.
we study Russian and Romanian emotional words.
neutral
train_100122
On the other hand, words which denote abstract concepts such as sensations, feelings and emotions keep more sound semantics in their forms (anger, joy, agitation).
on the Russian and Romanian data sets, the applied algorithms performed considerably better than baselines.
neutral
train_100123
Starting in 2000, for non-English speaking regions, the growth has surpassed 3,000%, compared with the overall growth of 342%.
for example, the sound z is present in Russian words with meaning of amazement; the transliterated sound sh can be found in Russian words representing a kind of stupefaction (there is no absolutely precise translation of these words in English).
neutral
train_100124
Our goal was to build domain-independent rules that do not rely on domain content words and emotional words.
it eliminates the bias introduced by the length of the text.
neutral
train_100125
This method can also be applied to analyze texts which do not explicitly disclose affects such as medical and legal documents.
the authors did not actually analyze the texts or language cues contained in the reviews.
neutral
train_100126
However, our concern is on the accuracy of the taggers instead of their speed and memory requirement.
the SVMM0C0 has been trained with the same data that has been used to train the TnT-based tagger, tagger2.
neutral
train_100127
The results show that the dialogue act labelling task can be improved by including the probability distribution of the number of segments.
to corroborate this problem, we calculated the precision, recall and F-measure of the experiments.
neutral
train_100128
The confidence interval for this experiment and the confidence interval of the baseline show that the difference between the results given by the models is statistically significant.
the last element P(S_c) is estimated by another Gaussian distribution that is computed from all the turns: the mean m_{S_c} and variance σ_{S_c} are computed from all the scores in the training data.
neutral
train_100129
As we report in Table 2, the average sentence length in the Europarl corpus is more than double that in the MultiBerkeley corpus, due to the different selection strategy for the sentences.
in general, the two results reflect the different goals of the two approaches: [1] are interested in investigating and adopting unsupervised techniques with poor semantic and syntactic information to automatically annotate a large scale (but noisy) training set and exploit it for semantic role labelling.
neutral
train_100130
If possible, we preferred Italian translations minimizing divergences with English.
as for MultiBerkeley, we could not apply Algorithm 1 because it requires the source sentences to be represented as syntactic trees, whereas the English FrameNet corpus has annotation pointing to flat chunks without parsing information.
neutral
train_100131
Afterwards, we present the evaluation and results.
the results are then combined and used to derive a similarity metric between short texts.
neutral
train_100132
The former case involves a rather straightforward mapping: since the ontology includes WordNet identifiers a mapping with Cornetto amounts to the retrieval of WordNet identifiers in the database.
it is reported in [13] that 985 concepts from WordNet 1.6 have been assigned the music label.
neutral
train_100133
What is most interesting, however, is that this required so minimal an investment in new data annotation.
indeed, since manual annotation can be costly and time-consuming, it is important to maximize the effectiveness of annotation efforts.
neutral
train_100134
Secondly, as mentioned earlier, if the target word is infrequent, it will have a low in-degree in the dictionary graph, and this is especially the case for TOEFL synonym questions, which tend to test the participants on a more challenging vocabulary of relatively low-frequency words.
this phenomenon provided evidence for the adaptability of dictionary-based methods to different domains or cultures.
neutral
train_100135
Given W_0 and P_0, we now follow this procedure for synonym extraction: 1.
one of the objectives of PbE is to discover such patterns.
neutral
train_100136
This is different from the dictionary graphs in IIE and its variants, which relate a definiendum to all its definientia.
dictionary definition texts, as a special form of corpora, can provide a better "controlled" environment for synonym distribution and thus, it would presumably be easier to find characteristic features specific to synonymy within definition texts.
neutral
train_100137
It is only surpassed by the recall value of random sampling with the lowest ratio of 1:1.
German is morphologically richer than English, for which most of the work has been done, and it possesses grammatical gender.
neutral
train_100138
by including examples from all areas of the search space.
we did optimize the classifiers' parameters.
neutral
train_100139
The supervised keyphrase extraction systems use the extraction model obtained from the training data to classify the candidates into keyphrases and rank them according to their importance in the document.
we collected three publicly available datasets with different properties, which allows comparison of the applicability of keyphrase extraction algorithms to those datasets.
neutral
train_100140
Its features are tuned using a genetic algorithm.
supervised approaches use a corpus of training data to learn a keyphrase extraction model that is able to classify candidates as keyphrases.
neutral
train_100141
The DUC dataset [24] consists of 308 documents from DUC2001 that were manually annotated with at most 10 keyphrases per document by two indexers.
they differ in length and domain (see table 1), and can thus be used to assess different properties of keyphrase extraction algorithms.
neutral
train_100142
For example, if we always extract 10 keyphrases, but a document only has 8 gold keyphrases assigned, then 2 extracted keyphrases will always be wrong.
the total number of selected approximate matchings is 566, as some matchings were included in multiple sets of the random matchings and morphological approximate matching Morph did not always account for 100 approximate matchings per dataset.
neutral
train_100143
Let synsets(w) denote the set of WordNet synsets for a word w, and let CW be the set of candidate words used by S10.
for instance, we do not attempt to extract representations for durative or repetitive events, or actions like escalate or accelerate that change quantities or numerical attributes.
neutral
train_100144
If the events are related to the main category of the article, only knowing the article category is enough.
as the event extraction system uses a supervised model, it is natural to ask whether supervised topic features are better than unsupervised ones.
neutral
train_100145
Unsupervised LDA performs best of all, which indicates that the real distribution in the balanced corpus can provide useful guidance for event extraction, while supervised features might not provide enough information, especially when testing on a balanced corpus.
hunting-related or shooting-contest-related activities should not be tagged as Attack events.
neutral
train_100146
For example, the word "attack" is very likely to represent an Attack event while the word "meet" is not.
2009) to outperform SVMs when extracting tag-specific document snippets, and is competitive with SVMs on a variety of datasets.
neutral
train_100147
Finally, once all arguments have been assigned, the trigger classifier is applied to the potential event mention; if the result is successful, this event mention is reported.
this is not always enough.
neutral
train_100148
We compare this unsupervised approach to a supervised multi-label text classifier, and show that unsupervised topic modeling can get better results for both collections, and especially for a more balanced collection.
we find that the gains are unevenly spread across different events.
neutral
train_100149
In the entertainment domain, the first negative seed removes the strongest bad rule.
we can cautiously conclude that the underlying minimally supervised bootstrapping approach to IE is not necessarily doomed to failure for domains that do not possess beneficial data sets for learning.
neutral
train_100150
A systematic analysis studies the respective data properties of the three domains including the distribution of the semantic arguments and their combinations.
the second negative seed does not lead to a big jump in performance.
neutral
train_100151
Unfortunately, one is often constrained by the lack of resources, tools or language experts, for instance when dealing with resource-scarce languages.
it usually requires a part-of-speech (POS) tagger and a training corpus annotated with shallow parsing tags.
neutral
train_100152
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248005.
it is not clear how the absence of any part-of-speech tagger should hamper the development of other natural language processing tools.
neutral
train_100153
The extended lexicon was used in combination with the extended transitions based on trigrams from section 5.3.
tagger (MElt) (Denis and Sagot, 2009) is a conditional sequence maximum entropy POS tagger that uses a set of lexical and context features, which are a superset of the features used by Ratnaparkhi (1996) and Toutanova and Manning (2000).
neutral
train_100154
From the error analysis, we can conclude that the added words do not correspond to the words that are needed in the test domain, which means that the HCRC map task corpus data are not similar enough to the CReST data.
the results of the previous sections show that adding information on which taggers trained out-of-domain agree is useful for moderately improving tagging accuracy and especially for re-estimating transition probabilities.
neutral
train_100155
In addition to components for visual perception and action execution, DIARC consists of five NLU components.
the parsing task is more difficult than in other experiments.
neutral
train_100156
For each MICA elementary tree t_j and XTAG elementary tree t_i, the observation probability matrix (B = [b_{i,j}]) also contains the probability P(t_j | t_i).
as it shows, here also the M-2 gives a better response compared to the M-1.
neutral
train_100157
it comes at the very end of the clause, as shown in bold-face in the example: On the contrary, if an adverb is used as a complement of a verb then it comes before the main verb, as shown in the following example: The lexical category V has three forms (corresponding to perfective/imperfective aspects and subjunctive mood).
the structure of VPHForm makes sure that we preserve all inflectional forms of the verb.
neutral
train_100158
Evaluating a resource grammar is just like evaluating a software library in general.
CN is a syntactic category, which is used to deal with the modifications of nouns by adjectives, determiners, etc.
neutral
train_100159
For instance, if a sentence or a question sentence is a complement of the verb then it takes a different position in the clause; i.e.
the current systems only seem to provide partial solutions, mainly because of the vocabulary differences (Humayoun and Ranta, 2010).
neutral
train_100160
This covers only three main types of noun phrases, but there are other types of noun phrases as well, i.e.
the way this rule is implemented may vary from one language to another, as each language may have different word order and/or agreement rules.
neutral
train_100161
To obtain these scores, we multiply, respectively, the sentence values for the PattFreq, CatFreq and CatOcc features by the ObjSim feature value.
for generating such summaries, we took into account the type of information users reflect when writing summaries of this particular domain.
neutral
train_100162
Regarding the readability assessment, Table 5 showed that our approach obtains results close to the human performance in Aker and Gaizauskas (2010b).
to construct them we adopted the dependency relational patterns extraction described by Aker and Gaizauskas (2010a).
neutral
train_100163
Our results indicate that our approach achieves high performance both in ROUGE and manual evaluation.
TS is an especially challenging Natural Language Processing (NLP) task, since the generation of summaries depends on a wide range of issues, such as the summarization input, output or purpose.
neutral
train_100164
The weight w i of each hyperedge is given by the context sensitive discriminative model discussed in section 4.4.
each triple consists of two concepts (or instances of concepts) connected by a relation.
neutral
train_100165
A hypergraph (H) is a generic graph wherein edges can connect any number of vertices and are called hyperedges.
the total number of sentences in the answers is 1862, i.e., 2.596 sentences per answer.
neutral
train_100166
4 The Apertium linguistic data contains 326 228 entries in the bilingual dictionary, 106 firstlevel rules, 31 second-level rules, and 7 third-level rules for Spanish-English; and 21 593, 169, 79 and 6, respectively, for Breton-French (see section 2.2 for a description of the different rule levels).
the results show that our hybrid approach outperforms both pure RBMT and PBSMT systems in terms of BLEU.
neutral
train_100167
Thus, as soon as the PBSMT system learns reliable information from the parallel corpus, Apertium phrases become useless.
table 5 shows the proportion of RBMT-generated phrases used to perform each translation.
neutral
train_100168
We focus on alleviating the data sparseness problem suffered by phrase-based statistical machine translation (PBSMT) systems (Koehn, 2010, ch.
the difference is statistically significant only under certain circumstances.
neutral
train_100169
It is always important to highlight that post-edited translations that do not match a reference are not necessarily bad as they could still be valid paraphrases.
when BLEU is computed with multiple references, even though the translations from all systems may differ from what was originally expected (ref_0), they can still be valid alternative translations that often match the choices made by other translators (ref_1 to ref_17).
neutral
train_100170
The code consists of a library that implements the actual text matching, and of a number of source files to demonstrate how to match entities in a text and how to extract the entity information from the NE resource file.
the matching software, after reading and analysing the NE resource file, searches for any of the known entities in multilingual text.
neutral
train_100171
1 We refer to "source" and "target" language for convenience only-our models are symmetric, as will become apparent.
we present a baseline model and several successive improvements, using data from the Uralic language family.
neutral
train_100172
The parameters L(e), or P(e), for every observed event e, are computed from the change in the total code-length: the change that corresponds to the cost of adjoining the new event e to the set of previously observed events E: Combining eqs.
we use the principle of recurrent sound correspondence, as in much of the literature, including the mentioned work, (Kondrak, 2002;Kondrak, 2003) and others.
neutral
train_100173
To model also insertions and deletions, we augment both alphabets with the empty symbol, denoted by a dot, and use Σ.
no gold-standard alignment for the Uralic data currently exists, and building one is very costly and slow.
neutral
train_100174
248347, and by the Slovenian Research Agency, grant no.
we use context-dependent cognates because calculating cognates between all lemmata of specific parts of speech proved to be very noisy even on high cognate thresholds and it did not have a positive impact on this task.
neutral
train_100175
Contexts are chosen from each interval in a round robin fashion in order of least entropy from each group.
we hypothesize that if this is the case, an independent measure of a cluster's sentiment will show a high likelihood that a cluster is either positive or negative in sentiment overall, or is a mixture of positive and negative sentiments so that the overall sentiment is neutral.
neutral
train_100176
We perform POS tagging with the Stanford parser (Levy and Manning, 2003).
after analyzing the wrongly classified examples, we found that Chinese word segmentation errors propagate to the sentence boundary detection task.
neutral
train_100177
It is difficult to detect the topic clause.
all the Chinese characters in a paragraph are successive (one by one) without word, clause, and sentence boundaries.
neutral
train_100178
Many connectives (such as because or for instance) always signal one specific discourse relation.
on one hand, this feature can detect co-taxonomic pairs such as current-new or rise-fall (as well as nontaxonomic relations such as accident-injured) whenever these occur very frequently.
neutral
train_100179
Work on explicit (i.e., connective-bearing) relations has emphasized simpler features, such as the syntactic neighbourhood of the connective or features based on tense and mood of the argument clauses (Miltsakaki et al., 2005).
section 3.1) are improved by the rich set of features.
neutral
train_100180
Ambiguous temporal markers such as after, as or while usually occur with a purely temporal reading, but also with additional non-temporal discourse relations, such as causal and contrastive readings.
the best feature set achieves F-measures of 0.41 (contrast), 0.39 (parallel) and 0.33 (evidence) on these relations, with precision values between 0.33 (evidence) and 0.36 (contrast), and recall values between 0.33 (evidence) and 0.47 (contrast).
neutral
train_100181
We firmly believe that our results in detecting noun compounds and named entities can be fruitfully applied in other higher-level applications as well in e.g.
the keyphrase extractor can still profit from already known NEs: in one case, they can be excluded from the set of keyphrase aspirants while in the other case, they are proper keyword candidates.
neutral
train_100182
no phrase was regarded as a keyphrase aspirant if it occurred only in the References part of an article.
the mwetoolkit basically does not mark MWEs in the raw text; it just extracts noun compounds from the text, i.e.
neutral
train_100183
Demonstrative NPs are mapped to nominal NPs by matching their heads (e.g.
bridging anaphora was not resolved by the system in these experiments (but still included in the evaluation) which might be a reason for the relatively low recall.
neutral
train_100184
Regarding the statistics about the words, it is worth noting that the documents in Romance languages (Spanish and French) have similar characteristics.
in the current society, information plays a crucial role that brings competitive advantages to users when it is managed correctly.
neutral
train_100185
Briefly, the main features of this approach are: i) redundant information is detected and removed by means of textual entailment; and ii) the Code Quantity Principle (Givón, 1990) is used for accounting relevant information from a cognitive perspective.
in this way, we use LingPipe for English, the Illinois Named Entity Tagger (Ratinov and Roth, 2009) for French, the NER for German proposed in (Faruqui and Padó, 2010), and Freeling for Spanish.
neutral
train_100186
News-Gist (Kabadjov et al., 2010) is a multi-lingual summariser that achieves better performance than state-of-the-art approaches.
the top levels were populated by the sentences that appeared in the most languages and the bottom level contained sentences appearing in the least number of languages.
neutral
train_100187
This is done in a straightforward way: the polarity of the aggregate opinion is computed as the average of the polarity of all the opinions from the source on the topic.
extraction/Coreference Resolution: next, our system labels fine-grained opinions with their topic and decides which opinions are on the same topic.
neutral
train_100188
In case of fusing event type information, it turned out that for 5 out of 51 events in our corpus none of the mono-lingual systems was able to assign any type information.
as can be observed, a gain of 4-5% and 8% in precision and recall could be obtained for the extraction of event type and numerical slots respectively.
neutral
train_100189
Therefore, it is very hard for a learning algorithm to precisely determine the relation types.
second, the results of EM_1, EM_2, and EM_3 are the results of our proposed method with three different initializations.
neutral
train_100190
As it is shown, NONE relations have had a great impact on the accuracy of the system.
these methods are slow and costly.
neutral
train_100191
They report consistent gains on argument classification by combining models based on different similarity metrics.
chunk or Base Phrase: the tokens and POS tags within every base phrase were collapsed into a surface and a POS span, respectively.
neutral
train_100192
We present an approach for Semantic Role Labeling (SRL) using Conditional Random Fields in a joint identification/classification step.
disappointingly, our system performs worse than that of Mitsumori et al.
neutral
train_100193
Following, we present the results after analyzing the entire corpus of the Senseval-2 competition.
to do this we use Equation 2, where we have substituted the variable w (word) with the variable sw_i (sense), where sw_i indicates the i-th sense of word w. We convert each RSt into a vector.
neutral
train_100194
Given the nature of the corpora provided for the 1st International Competition on Plagiarism Detection, we cannot apply them to test a speller system given that the plagiarisms are automatically generated and therefore they do not contain misspellings (Potthast et al., 2009).
therefore, it would be beneficial to apply a spell corrector over the documents in our corpora, such as the one described in (Gao et al., 2010).
neutral
train_100195
AttP: In the final setting, singleton pronouns are attached to an antecedent.
table 1 shows the results for both baselines.
neutral
train_100196
Entity Instantiations are not considered in the MUC and ACE annotation schemes, which consider relationships between different types of entity, such as those between persons and locations, rather than our groups and instances of entities of the same type.
our work is also related to the problem of bridging anaphora.
neutral
train_100197
This of course explains why there has been an increasing amount of research both on named entity recognition and on relation analysis in the last 20 years (MUC6, 1995; Appelt and Martin, 1999).
the success rate is often above .9 or even .95 F-measure for major categories (persons' names, locations' names) in newspapers (Collins and Singer, 1999).
neutral
train_100198
In addition, we also test 1%, 5%, 10%, 25% and 50% of the training set.
very few high-level features describing the presence of certain semantic classes or opinion words perform consistently well across different domains.
neutral
train_100199
Sometimes it was not clear-cut whether a multiword unit is a noun compound or a NE.
the CRF+OwnNE+SF row in table 6 represents results achieved when the NEs identified by using the entire Wiki50 as the training dataset functioned as a feature.
neutral
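The stringlengths and stringclasses figures in the schema header can be recomputed from the records themselves. Below is a minimal pure-Python sketch, assuming the records are available as a list of dicts with the four fields shown above; the single toy record is illustrative, not part of the loading code.

    # Recompute per-column statistics like those in the schema header:
    # (min, max) string lengths for the text columns and the sorted set
    # of distinct classes for the label column.
    def column_stats(records):
        stats = {}
        for col in ("id", "sentence1", "sentence2"):
            lengths = [len(r[col]) for r in records]
            stats[col] = (min(lengths), max(lengths))
        stats["label"] = sorted({r["label"] for r in records})
        return stats

    # Toy input shaped like the rows above (real values would come from
    # the loaded split).
    records = [
        {"id": "train_100100",
         "sentence1": "Algorithm 1 shows the expansion to structured output learning with kernels.",
         "sentence2": "when the average cost is used, the utterance position does not change results much.",
         "label": "neutral"},
    ]
    print(column_stats(records))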