Column      Type            Range
id          stringlengths   7–12
sentence1   stringlengths   6–1.27k
sentence2   stringlengths   6–926
label       stringclasses   4 values
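Each record below occupies four fields in the order id, sentence1, sentence2, label. As a minimal sketch of how records with this schema might be consumed, the following Python snippet assumes the rows have been exported to a JSON Lines file; the path pairs.jsonl and the helper load_pairs are hypothetical illustrations, not part of the dataset's official tooling. It loads the pairs, keeps only records matching the four-column schema above, and counts examples per label.

```python
import json
from collections import Counter

# The four columns listed in the schema above.
EXPECTED_FIELDS = {"id", "sentence1", "sentence2", "label"}

def load_pairs(path):
    """Read sentence-pair records from a JSON Lines file (one JSON object per line)."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # Keep only records that carry all four schema columns.
            if not EXPECTED_FIELDS.issubset(record):
                continue
            pairs.append(record)
    return pairs

if __name__ == "__main__":
    # "pairs.jsonl" is a hypothetical export of the rows shown below.
    pairs = load_pairs("pairs.jsonl")
    label_counts = Counter(p["label"] for p in pairs)
    print(f"{len(pairs)} pairs; label distribution: {dict(label_counts)}")
```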
train_92900
First, we compare the proposed method to both a traditional MST parser (McDonald et al., 2005), and a deterministic parser (Nivre and Scholz, 2004).
while statistical parsers achieve higher and higher accuracies on in-domain text, the creation of data to train these parsers is labor-intensive, which becomes a bottleneck for smaller languages.
neutral
train_92901
Histograms for the 32K page size index (omitted here for lack of space) look similar to Figure 4 and Figure 5, but the compressed index shows a very marginal increase in the number of faster forests and a clear increase in the time for the slower forests.
our experiments test the performance of search with page size and compression of keys.
neutral
train_92902
We feel that the most likely reason in this case is the first one.
the average forest search time with the baseline setup is compared with the best breadth-first index based search method in table 2.
neutral
train_92903
Once a source language sentence is parsed into a packed forest and pruned, the next step is to find trees in the forest that have matching rules in the translation rule dictionary.
to the best of our knowledge, no index-based search structures for incrementally finding trees have been proposed to date.
neutral
train_92904
The differences between languages can be seen in Figure 1, which shows an example of English-Japanese.
we incorporate dependency relations of words into the alignment model and define the reorderings on the word dependency trees.
neutral
train_92905
EXPAND-1 does not have any restrictions on its operation.
combine the subtrees in each language so as to create parallel sentences.
neutral
train_92906
This is the same for the TOGGLE operation.
we introduce a pseudo parent to capture the relations.
neutral
train_92907
Given a paraphrased sentence T, which consists of J words, the novelty function Novel(TM, T, n, j) judges whether the occurrence of t_j generates a new n-gram to the translation model (TM) according to the prior n-1 words of t_j.
the toolkit supports extraction of both phrasal paraphrases and syntactically constrained paraphrases.
neutral
train_92908
The other problem is that the context is not considered during phrasal paraphrase substitution, which causes a low paraphrasing accuracy.
the sizes of the augmented phrase tables are increased by 56% and 171% for the 29k set, 25% and 132% for the full set, which prove that our sentence novelty model has made considerable contributions to the enrichment of phrase tables.
neutral
train_92909
The language resources were randomly split into three subsets for the evaluation of translation quality (eval, 1000 sentences), the tuning of the SMT model weights (dev, 1000 sentences) and the training of the statistical models (train).
all data sets were case-sensitive with punctuation marks preserved.
neutral
train_92910
In general, exact optimization of ILP is NP-Hard.
the SMT system is bootstrapped using the seed training corpus S. The held-out set D is decoded by the SMT to obtain 1-best translation hypotheses.
neutral
train_92911
If k sentences must be selected in a given active learning iteration, we use the greedy algorithm to prune the problem by choosing k > k sentences from the corpus, and subsequently construct the ILP on this smaller problem to select the required k sentences.
it is the knowledge of which source words were translated incorrectly that is useful for sample selection.
neutral
train_92912
Indices of the following word chunks C_k (j + 1 ≤ k ≤ |C|) are incremented by 1, and the separated word cew_j becomes the new C_{j+1}.
as for Semi-Markov, L is 10 and K is 191 in this experiment.
neutral
train_92913
The numbers after L= indicate the maximum length of the word-chunks for Semi-Markov perceptron.
precision is defined to be the number of correctly recognized NEs divided by the number of all recognized NEs.
neutral
train_92914
In addition, our method can use features extracted from word chunks that cannot be obtained in word-based NE recognitions.
in addition, our algorithm requires a lower maximum memory size than Linear-Chain and Semi-Markov.
neutral
train_92915
Our proposed method showed good performance in this experiment; however, there is a drawback due to our recognition strategy.
we limit the maximum length of the word-chunks.
neutral
train_92916
Then we split the converted training data into five portions.
the computational cost will be higher than ever when we recognize a large set of classes like Sekine's extended NE hierarchy, which includes about 200 types of NEs covering several types of needs of IE, QA, and IR (Sekine et al., 2002).
neutral
train_92917
For example, in Figure 1, an EL system must return NIL for "C9 hasName(i, n) hasFirstWord(i, w), hasLastWord(i, w), isBlacklisted(w): the word sequence w is blacklisted, containsMoreSpecificMentions(i): the i-th gene mention collocates with more specific gene mentions in the current context.
entrezGene (Maglott, Ostell et al.
neutral
train_92918
Formula L.2 is a hard constraint that must always hold, whereas the others are soft and can be violated.
such an evaluation is more relevant to IE tasks such as the bio-molecular event extraction.
neutral
train_92919
This shows that the saliency property is effective in instance-based evaluation.
we employed the greedy backward sequential selection algorithm (Aha and Bankert 1995) to select the optimized feature sets for ME NF with ten-fold cross validation on the training dataset.
neutral
train_92920
Among them, 112 NEs refer to the two focused NEs, and less than 10 NEs in the comments refer to the other three NEs in the news article.
using no entities (K=0) and using all entities (K=100) will much lower the overall performance, which demonstrates that it is important to leverage appropriate named entities for feature generation in our proposed method.
neutral
train_92921
The classification results are then used to cluster words into cognate sets.
in Section 4, we describe our method of clustering cognates.
neutral
train_92922
We thus find the SUPERVISED LPF method not only impractical, but also rather unreliable.
our solution is to introduce a set of binary language pair features, one for each pair of languages in the data.
neutral
train_92923
• CB2: selects the most frequently observed synset in the training data among the outputs of B1-B3.
finally, the synset which has the largest SVM score is selected as appropriate synset for trg.
neutral
train_92924
The Japanese WordNet does contain other meanings for the term anime, such as a hard copal derived from an African tree (synset 14896018-n) and any of various resins or oleoresins (synset 14766265-n).
experiments show that the proposed method can identify synsets for 2,039,417 inputs at precision rate of 84%.
neutral
train_92925
For Malt, we obtain an absolute LAS increase of 8.8% on the discussion forum data and an improvement of 5.6% on the Twitter data.
some progress has also been achieved in adapting parsers to new domains using semi-supervised and unsupervised approaches involving some labelled source domain training data, little, if any, labelled target domain data and large quantities of unlabelled target domain data.
neutral
train_92926
What is needed is a sentence-splitter tuned to the punctuation conventions of Twitter.
one approach to the parser domain adaptation problem is to train a new system using large quantities of automatically parsed target domain text.
neutral
train_92927
Unless otherwise stated, comparisons were significant with p < 0.05.
to our knowledge, our experiments are the largest in supervised MWE-token classification to date.
neutral
train_92928
On typespecialised classification, our new idiom and WSD features achieve more consistent gains.
feature count cutoff of one was used to filter out uninformative features.
neutral
train_92929
Some MWE-types have only an idiomatic meaning, such as the English greeting How do you do?, interpretation of which can be perplexing if attempted literally.
each instance in the corpus was preprocessed by running it through KNP to extract specific linguistic information.
neutral
train_92930
have exclusively idiomatic meaning, other MWE-types like the phrase kick the bucket may be idiomatic or literal depending on context.
the data show a definite positive trend with the number of instances, reaching 0.884 under a cap of 650 instances (and 589 average actual instances) per MWE-type.
neutral
train_92931
In first order logic, if there is at least one predicate that violates formulas in a possible world 4 , the world is not valid.
figure 2: Example of a question sential question is the latter sentence.
neutral
train_92932
In this model, we used the observed predicates for our model as features and trained a binary classifier for the questionanswer relation and a one-versus-rest classifier for the answer-answer relation.
we incorporate identification of these superrelations into our model.
neutral
train_92933
In detail, we first extract pairs of queries and user-clicked questions from query logs, with which we induce question generation templates.
first, search engine query logs are powerful data for the research of query-toquestion generation, from which we have acquired a large volume of question generation templates.
neutral
train_92934
The remaining errors are questions that are well formed but less likely to be asked by people.
for example, query "故宫门票 (The Imperial Palace / ticket)" can instantiate, where c(Qr, Tp_i) is the frequency that Qr instantiates Tp_i in the query logs, and c(Qr) is the frequency that Qr occurs in the query logs.
neutral
train_92935
The extended class sequential rule model is called ExCSR hereinafter.
if there are more than one such rule, then choose the one with the least Distance-Constraint value; Step 4: R' = R' ∪ {r}; R = R - {r}; Step 5: D = D - {instances satisfied by r}; Step 6: if D is empty, then return R'; Step 7: for each r ∈ R, update support(r) with regard to the updated D; Step 8: go to Step 2.
neutral
train_92936
For this purpose, we further define a measure as Equation (2), where label_value(i) denotes the value of the i-th distance label (the values of [NEIGH], [NEAR], and [ANY] are set to 2, 1, and 0.5, respectively), and k is the total number of distance labels used in rule r. Obviously, shorter rules with simpler distance constraints are more general and preferred.
step 1: step 2:  = Ds-sPM(<>, 0, s); step 3: for each  in : (a) count the frequencies of all covered classes (the classes that the covered sequences belong to), and find the most frequent class label y ∈ Y; (b) if support(y) ≥ min_sup and confidence(y) ≥ min_conf, then output y.
neutral
train_92937
Example rules with the same support and confidence: Rules 1 and 2 have the same confidence (100%) and support (32), and thus the same interestingness.
most of the work follows the automatic methods, which will be also adopted in our work.
neutral
train_92938
(2009) ranked questions with several CQA specific features.
among the 12,880,882 questions we have collected, only 1.87% occur more than once.
neutral
train_92939
This was repeated for 4 rounds (400 words), which took approximately 1.5 hours total.
overall, we believe that the existing online communication framework proved quite effective in rallying and organizing members for the project.
neutral
train_92940
Overall, we believe that the existing online communication framework proved quite effective in rallying and organizing members for the project.
in addition, this allowed us to clarify the framework required, and develop tools that can be used to allow for provision of information on an even larger scale in future disaster situations.
neutral
train_92941
We chose to verify and match the information by hand to prevent the provision of misinformation on a sensitive topic such as safety of earthquake victims.
the largest challenge in the project organization was the initial underestimation of the outpouring of support that the project would see.
neutral
train_92942
This method requires labeled training data which is often difficult and expensive to produce.
overall, the product data cleansing solution is achieved by a collection of rule sets, each tackling a given product vertical.
neutral
train_92943
The output of these rules are used to populate Data Warehouses and Product Information Management (PIM) systems.
finally, a domain expert writes rules to move entries into appropriate database columns and complete the standardization.
neutral
train_92944
Section 4 provides details of the proposed generative model for extracting bilingual topic hierarchies from an unaligned bilingual corpus.
the internal nodes formed during the hierarchical clustering process share words with their children.
neutral
train_92945
Based on the common structure of Q&A forum threads, we will use the following definition to capture near-duplicate threads: Definition 1 Near-Duplicate Thread - Two threads are near-duplicates of each other in a Q&A forum: (1) if both their question and answer parts are the same, or (2) if their question parts are the same and one of the answers contains additional information compared to the other answer.
figure 6 shows an example of it.
neutral
train_92946
Gates → Gates (disambiguation)).
in the entity linking, if the selected entity in the Wikipedia KB is not included in the TKB, our system will return NIL.
neutral
train_92947
This system includes 2 components: candidate selection and disambiguation, in which the disambiguation part is based on the graphical method we described in section 3.2.
the NIL accuracies decrease and the overall accuracies have no obvious changes.
neutral
train_92948
We can also see that Sim.
top 10 candidates are kept thereby.
neutral
train_92949
Then, for example, the probabilities of words given t can be obtained using Eq.(5).
this is especially true if there are few relevant documents.
neutral
train_92950
We implemented pseudo RF using our method for n = 10, and compared the results with the three (re-)ranking results described above.
for each document in the search results, we construct a hybrid language model, which we call the HYB-based language model.
neutral
train_92951
, β_K) represents the distributions over words for each topic z_k.
estimation with the Newton-Raphson method has the disadvantages that it takes a long time and each estimated α_k can be a negative value under certain circumstances.
neutral
train_92952
Of the total 67 person entities in our test data, the IE-pipeline is not able to extract any employment information for 12 of them.
freebase 1 is a freely available online database of structured knowledge.
neutral
train_92953
Generally speaking, the articles in news domain usually contain descriptions of date and events accompanying the publication sequences.
acknowledgments The authors would like to thank the anonymous reviewers for their comments on this paper.
neutral
train_92954
Meanwhile, more than two summarization sentences, or sentences similar to them, may co-occur in a source document.
if there is no edge from ss_k to ss_i in G, then add an edge to G; else if weight(ss_i, ss_k) is greater than the weight of the existing edge, then update the edge.
neutral
train_92955
Given a standard ordering, randomly produced orderings of the same objects would get an average τ of 0.
we believe the source documents can give us some effective and efficient information.
neutral
train_92956
How to conduct an efficient and effective method for sentences ordering is a difficult but important task for both multidocument summarization and other text processing job, e.g.
chronological information cannot be easily extracted from those non-news documents and constructing a large corpus also is not so easy.
neutral
train_92957
An abstract summary on the other hand is a summary where the text has been broken down and rebuilt as a complete rewrite to convey a general idea of the original text.
it should be noted that no preprocessing in terms of stop word removal and stemming was performed during the ROUGE evaluation, since the package is tuned for English and no Swedish lexicon for that purpose was available at the time.
neutral
train_92958
This work was partially funded by the LiveMemories project (www.livememories.org).
arg2 Labels: The sense of the connective (F2) refers to one of the four top-level classes in the PDTB sense hierarchy, namely TEMPORAL, COMPARISON, CONTINGENCY and EXPANSION.
neutral
train_92959
To obtain this baseline, we take into account that i) Arg2 is the argument immediately adjacent to the connective and ii) 90% of the relations in PDTB are either intra-sentential or involve two contiguous sentences.
as for the relations considered, we focus here exclusively on explicit connectives and the identification of their arguments, including the exact spans.
neutral
train_92960
Our approach is motivated by two intuitions: first, the identification of Arg2 and Arg1 may require different features, since the two arguments have different syntactic and discourse properties, as discussed in Section 3.
we increase the number of features at each step, and report the corresponding performance.
neutral
train_92961
Relational lasso controls such relations underlying features by introducing a penalty parameter for each feature.
the procedure obtains three kinds of input: learning data, a relation among features, and parameter values.
neutral
train_92962
On the other hand, our α is estimated given a noisy relation among features, thus it plays a different role from w. In this sense, the way to handle this additional parameter for relational lasso is completely different from adaptive lasso.
this shows that the use of underlying information among features enhances the overall performance.
neutral
train_92963
The relations between features are denoted as R, which is provided to the proposed algorithm and used to estimate α. R denotes a pairwise dependence relation between features.
the target function has two terms, one for fitting and another for regularization.
neutral
train_92964
On the aspect of classification accuracy, MetaTD catches up with the 1-vs-Rest approach.
the experiments are run on four 64-bit computers with multi-core 1.9GHz AMD CPUs.
neutral
train_92965
The slot nodes are used in such a way that the adopted TK function can generate fragments containing one or more children.
include predictions and expectations reported in the press.
neutral
train_92966
While the removal of isArg drops the precision and saves the recall, the removal of delete is the other way around.
first-order logical formulae such as formulae (2) and (3) become the feature templates of MLN.
neutral
train_92967
Because our approach deals with all three cases in one joint model and ga-case is dominant in the data, it extracts more ga-case instances than the others.
they created three separated models corresponding to each of the case; ga (Nominative), wo (Accusative), and ni (Dative).
neutral
train_92968
In practice, we also need to choose a training regime (in order to learn the weights of the formulae we added to the MLN) and a search/inference method that picks the most likely set of ground atoms (PA relations in our case) given our trained MLN and a set of observations.
our method can extract such important information better than previous work.
neutral
train_92969
So, their models neglect the dependencies between cases.
extraction of inter-sentential PA relations which are crossing sentence boundaries is intractable.
neutral
train_92970
Once again, the strongest feature was one of the LSA set, feature 12 (LSA similarity score of the target sentence with all the sentences of the abstract classified as Purpose).
we have measured the agreement between two human annotators by the Kappa statistics over a randomly selected subset of 46 sentences of the corpus and was around 0.7 (see Table 8).
neutral
train_92971
A total of 2,293 sentences were automatically annotated and manually revised.
the noise from the automatic annotation of rhetorical structure does not interfere in the coherence annotation.
neutral
train_92972
In this model, the generative process involves: (1) three subjectivity labels for sentences (i.e., sentence expresses subjective opinions as being positive/negative, or states facts as being objective); (2) a sentiment label for each word in the sentence (either positive, negative, or neutral), and (3) the words in the sentences.
the formal definition of the subjLDA generative process is as follows: In practice, it is quite intuitive that one classifies a sentence as subjective if it contains one or more strongly subjective clues (Riloff and Wiebe, 2003).
neutral
train_92973
We explored incorporating two subjectivity lexicons as prior knowledge for subjLDA model learning, namely, the subjClue 3 and SentiWordNet 4 lexicons.
to most of the existing approaches requiring labelled corpora or linguistic pattern extraction, we view this problem as weakly-supervised generative model learning where the only input to the model is a small amount of domain independent subjective/neutral words.
neutral
train_92974
Hu and Liu (2004) proposed a technique based on association rule mining to extract frequent nouns and noun phrases as product aspects.
given an application domain corpus, the user first provides a few seed resource terms.
neutral
train_92975
A resource in a domain is often an aspect or implies an aspect.
there are some important differences between resources and other types of aspects.
neutral
train_92976
Op, the performance figures remain higher than with the non-contextual Op classifier.
(1.b), (2.b) and (3.c) convey implicit opinions and (4.a) is subjective, but nonevaluative.
neutral
train_92977
In the second step, the EDUs which contain at least one token that belongs to our subjective lexicon were retained for a further segmentation.
using discourse for SA raises new issues: Is sentence/clause subjectivity-based analysis appropriate?
neutral
train_92978
Moving to the clause level is also not appropriate, since several opinion expressions can be discursively related as in The movie is great but too long where we have a Contrast relation or as in Mr. Dupont, a rich business man, has been savagely killed where we have an Elaboration because the appositive gives further information about the eventuality introduced in the main clause.
for both setups, we have used the SVM-light software package 3 .
neutral
train_92979
On the Test data, contextual features detect more subjective (explicit or implicit) EDUs than without them.
the study of this hypothesis falls out of the scope of this paper and is therefore left for future work.
neutral
train_92980
In addition, since we have already built an unsupervised morph analyzer as described in section 3.1, which is acquired by the same bootstrapping algorithm, we would like to make use of them as well, in a straightforward way: as long as a word can be analyzed by the acquired morph analyzer, it should not be output as closed-class items.
we experiment with four lists of closed-class words of the top 200 words acquired by the 'baseline-typeC', 'baseline-tokenC', 'bootstrap-typeC' and 'bootstrap-tokenC' respectively, as introduced in section 3.2.
neutral
train_92981
Finally, to demonstrate the value of such a highly pure list of closed-class words, which previously could not be acquired through unsupervised learning, we plug the acquired list into a previously existing minimally supervised tagging system (Zhao and Marcus, 2009), which only requires a closed-class lexicon for tagging all words.
after running the bootstrapping algorithm for words, we have already obtained a list of words that can be used for filtering the output of the baseline models.
neutral
train_92982
This requires that the pivot transliteration unit be consistent in the two individual modes.
table 5 reports the performance using the default setting of eq.
neutral
train_92983
For words that do not exist in the dictionary, the decoder still considers every possible tag.
at each step, a REDUCE-LEFT/RIGHT action a adds a set of delayed features to the delayed feature vectors of state ψ, where Φ_1(ψ, a) and Φ_2(ψ, a) are the first-/second-order delayed features generated by action a being applied to ψ.
neutral
train_92984
Note that the above formulation with the delayed features is equivalent to the model with full look-ahead features if the exact decoding is performed.
we propose to use the features shown in Table 3 (b).
neutral
train_92985
Within the other structures, we distinguish three types of features: deletion, permutation and transformation.
knowing what type of interrogative adverb corresponds to an adverb allows it to be categorized semantically), and paraphrasing features (e.g.
neutral
train_92986
not valid) for the lexical entry corresponding to the row.
the tables have been converted into an interchange format, based on the same linguistic concepts as those handled in the tables.
neutral
train_92987
For example, the English Penn Treebank does not represent the syntactic structure of prenominal nominal and adjectival modifiers, even though it is generally assumed that such structure exists.
for each linguistic phenomenon Φ, we ask two questions: (1) is Φ captured in both DS and PS guidelines?
neutral
train_92988
• a second table of conversion functions is consulted for each word in the dependency tree after the fundamental algorithm has been applied.
further development of the Thai CDG formalism is also expected, in particular for the analysis of sentence-like noun phrases.
neutral
train_92989
(Eisner, 1996) proposed inserting an embedded NP into each PP to solve this problem.
the dependency labels obtainable with absolute certainty in this way are often of the less interesting kind -e.g.
neutral
train_92990
In particular the data have been collected not only from French radio channels, but also from a North-African French-speaking radio channel.
the higher accuracy reached with this type of features is paid for at training time by the number of feature functions generated in situations like the one reported above, which makes direct CRF model training infeasible.
neutral
train_92991
0 and GEN signify null postposition and genitive postpositions respectively. By distinguishing intra-clausal structures from inter-clausal structures, the 2-Hard setup uses shallower trees and is able to make better global decisions by using more contextual information.
this task comprises two sub-tasks: a) relative clause identification from the 1st stage output and b) identifying the head of the relative clause from the matrix clause.
neutral
train_92992
One such case was the absence of morphological features of lexical items in the 2 nd stage for 2-Hard.
the POS category and the lexical item of the elements in Stack and Input buffer are more crucial for 2 nd stage relations than for the 1 st stage specific relations.
neutral
train_92993
The total number of inversion pairs of σ, also called the inversion pair cardinality of σ, is a proper measurement of the distance between σ and the identity permutation (1, 2, ..., n).
our goal is to recover the correct, or reasonable, word order of the target sentence.
neutral
train_92994
The difference is that the source sentence should be a tree instead of a string and additional syntactic constraints operate.
we can not simply calculate the outside probability of the hierarchical rule using the product of outside probabilities of phrase pair and subphrase pairs.
neutral
train_92995
The fluency can be better addressed by incorporating morphological information such as proper handling of the case markers of the MNEs.
this significant presence of these entities plays an important role in the news corpus.
neutral
train_92996
The RMWEs in Manipuri are classified as: (i) Complete RMWEs, (ii) Partial RMWEs, (iii) Echo RMWEs and (iv) Mimic RMWEs.
the presence of non-NE transliterated entities is lesser and estimated at 9.2%.
neutral
train_92997
when all w^f_{e_1,e_2} are 0).
we detail the scheme on the Bi-word models, the Mono-word models can be handled analogously.
neutral
train_92998
This is different for the conditional models, which are easier to handle but where most approaches are based on initializing from single-word based models (Brown et al., 1993;Vogel et al., 1996;Al-Onaizan et al., 1999).
our comparison of projected gradient descent (PGD) and expectation maximization (EM) revealed that EM leads to better alignments, although PGD finds a comparable but slightly higher objective value.
neutral
train_92999
In this paper we much generalize on this work, considering a class of models we term Bi-word models.
the initial probability and the inter-alignment model share the parameters p 0 and p 1 .
neutral