id: string (lengths 7 to 12)
sentence1: string (lengths 6 to 1.27k)
sentence2: string (lengths 6 to 926)
label: string (4 classes)
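
As a quick orientation to the schema above, the Python sketch below shows one way records with these four columns could be represented and tallied. This is a minimal illustration only: the NLIRecord class and the records list are hypothetical names introduced here, and the single example row is copied (and truncated) from the dump that follows.

from dataclasses import dataclass
from collections import Counter

@dataclass
class NLIRecord:
    # Mirrors the four columns in the schema above.
    id: str         # e.g. "train_97000"
    sentence1: str  # premise-style sentence
    sentence2: str  # hypothesis-style sentence
    label: str      # one of 4 classes; every row shown in this dump is "neutral"

# One illustrative record, copied (and truncated) from the first row below.
records = [
    NLIRecord(
        id="train_97000",
        sentence1="Let q_s = q_s \\ q_t and q_t = q_t \\ q_s, "
                  "where \\ is the set difference operator.",
        sentence2="the final co-occurrence count of two arbitrary terms "
                  "w_i and w_j is denoted by N_{i,j} ...",
        label="neutral",
    ),
]

# Tally how many records fall into each label class.
label_counts = Counter(record.label for record in records)
print(label_counts)  # Counter({'neutral': 1})
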
train_97000
Let q_s = q_s \ q_t and q_t = q_t \ q_s, where \ is the set difference operator.
the final co-occurrence count of two arbitrary terms w_i and w_j is denoted by N_{i,j} and is defined as the sum over all query pairs in the session logs; let N be the sum of co-occurrence counts over all term pairs.
neutral
train_97001
Although approximate similarity search is well-studied, we are not aware of any non-trivial cross-language name search algorithm in the literature.
fortunately, efficient algorithms exist if instead of exact nearest neighbors, we ask for approximate nearest neighbors (Arya et al., 1998).
neutral
train_97002
Note that several English Wikipedia names sometimes get the same score for a query.
most users of the underrepresented languages of the world have no choice but to consult foreign language Wikipedia articles for satisfying their information needs.
neutral
train_97003
Decisions about more sophisticated encoding of nonlinguistic information may thus require more knowledge about children's representations of the world around them. We find (1) that it is possible to jointly infer both meanings and a segmentation in a fully unsupervised way and (2) that doing so improves the segmentation performance of our model.
by counting the co-occurrences, we also compute a gold standard probability distribution for the words given the topic, P (w|z, x = 1).
neutral
train_97004
The two tasks can be integrated in a relatively seamless way, since, as we have just formulated them, they have a common objective, that of finding a minimal, consistent set of reusable units.
our model simulates the learning task, taking as input the unsegmented phonemic representation of the speech along with the set of objects in the non-linguistic context as shown in Figure 1 (a), and infers both a segmentation and a word-object mapping as in Figure 1 (b).
neutral
train_97005
Briefly, each utterance has a single topic z_j, drawn from the objects in the non-linguistic context O_j, and then for each word w_ji we first flip a coin x_ji to determine if it refers to the topic or not.
precision is the fraction of gold pairs among the sampled set and recall is the fraction of sampled pairs among the gold standard pairs.
neutral
train_97006
We present a low-resource, data-driven, and language-independent approach that uses a hybrid word-and consonant-level conditional Markov model.
the reduction in error rate for our cons-only and hybrid models tends to be lower for DER than WER in all languages except for English.
neutral
train_97007
All language processing applications require input text to be tokenized into words for further processing.
though the merged form is not considered correct diction, it is still frequently used and thus has to be handled.
neutral
train_97008
As has been discussed, space does not necessarily indicate word boundary.
statistics is only used in ranking of segmentations.
neutral
train_97009
These works differ from the one presented here in that we apply online learning techniques to train generative models instead of discriminative models.
following the TransType ideas, Barrachina et al.
neutral
train_97010
This prevents models from doing better or worse just because they received different starting points.
three preference judgments are obtained for each pair of translations and are combined using weighted majority vote.
neutral
train_97011
In the case of recursive grammars, there is no problem with the stickbreaking representation and the order by which we enumerate the nonterminals.
in §4 we give an empirical comparison of the algorithm to MCMC inference and describe a novel application of adaptor grammars to unsupervised dependency parsing.
neutral
train_97012
We follow and extend the idea from Johnson et al.
note that the cost of the grammar preprocessing step is amortized over all experiments with the specific grammar, and the E-step with variational inference can be parallelized, while sampling requires an update of a global set of parameters after each tree update.
neutral
train_97013
4 Sampling Non-Binary Representations. We can sample in models without a natural binary representation (e.g., HMMs with more than two states) by considering random binary slices.
• PTSG: R is the set of grammar symbols, and each θ_r is a distribution over labeled tree fragments with root label r. Table 1 (notation used in this paper): binary variables (to be sampled); z: latent structure (set of choices); z_{−s}: choices not depending on site s; z_{s:b}: choices after setting; counts: sufficient statistics of z.
neutral
train_97014
Stem and character features also contribute to the performance gain.
these expected counts are then normalized in the M-step to re-estimate θ: Normalizing expected counts in this way maximizes the expected complete log likelihood with respect to the current model parameters.
neutral
train_97015
We consider the same generative, locally-normalized models that dominate past work on a range of tasks.
As expected, large positive weights are assigned to both the dictionary and edit distance features.
neutral
train_97016
Vybornova and Macq (2007) aimed to embed information by exploiting the linguistic phenomenon of presupposition, with the idea that some presuppositional information can be removed without changing the meaning of a sentence.
let D be the maximum number of sentence boundaries between two subsequent paraphrasable sentences in T. For every D sentences within a cover text T, there will be at least one paraphrasable sentence.
neutral
train_97017
Taking the negative log to convert that probability into a cost function: Finally, we define the cost of inserting a new column into the alignment to be equal to the number of columns in the existing alignment, thereby increasingly penalizing each inserted column until additional columns become prohibitively expensive.
shaw and Hatzivassiloglou found financial text particularly difficult to order, and reported that their performance dropped by 19% when they included nouns as well as adjectives.
neutral
train_97018
Finally (and perhaps most importantly), we expect that our model would benefit from additional training data, and plan to train on a larger, automatically-parsed corpus.
the model only makes predictions for 74.1% of all modifier pairs in the test data, so recall is quite low (see Tables 4 and 6).
neutral
train_97019
Performance did not drop precipitously upon the removal of any particular feature type, indicating a high amount of shared variance among the features.
in this paper, we focus on question generation (QG) for the creation of educational materials for reading practice and assessment.
neutral
train_97020
Some seeds, such as (e), extract a large quantity of instances from the very beginning, resulting in fewer bootstrapping iterations, while others, such as (d), spike much later, resulting in more.
the distribution is similar to the seed distribution for the English people and cities patterns.
neutral
train_97021
In recent coarse-grained evaluations, such systems have achieved accuracies of close to 90% (Pradhan et al., 2007;Agirre and Soroa, 2007;Schijvenaars et al., 2005).
for many semantic natural language processing tasks, systems require world knowledge to disambiguate language utterances.
neutral
train_97022
For each query, snippets are collected by parsing the web-pages returned by Yahoo!.
7.20% of these change instances are due to one or more parts of speech changes, and are classified to change class PoS.
neutral
train_97023
7.20% of these change instances are due to one or more parts of speech changes, and are classified to change class PoS.
the most common change class is PL2P.
neutral
train_97024
For these classes, change classes P2LMw, P2L and P2PL, most probably the parser output after the replacement is wrong.
excluding parts of speech from the comparison, there is no other difference between the two parses.
neutral
train_97025
Our approach focuses on English multiword expressions that appear as sequences in text.
in contrast to change class PL2P, in change class P2PL (Phrase to Phrases or Leaves), after the replacement, the tokens of one phrase are either assigned to more than one phrase or appear as leaves.
neutral
train_97026
Since the baseline parser is different, we didn't make a direct comparison here.
we also compared supertagging results with previous works (reported on section 22).
neutral
train_97027
This indicates that the focus on algorithms that guarantee well-formed trees is justified.
3 Experiments. The most common approach for combining independently-trained models at parsing time is to assign each candidate dependency a score based on the number of votes it received from the base parsers.
neutral
train_97028
SITGs have proven to be a powerful tool in Syntax Machine Translation.
3) in which a string in one side is associated with the empty string in the other side through rules that are not lexical rules.
neutral
train_97029
In this case, the template for etree instance #1 will match etree query E1, with the additional information stored that the address 1.1.2 will be used for later processing.
step 1 of the query processing does not actually check this, since it is simply going through each template, without examining any anchors, to determine which have the appropriate structure to match a query.
neutral
train_97030
This suggests that the log probability scores from both parsers are internally consistent, but need to be recalibrated when the parses are combined.
we deliberately added features that incorporated linguistic notions such as head, governor and maximal projection, as the Berkeley parser does not explicitly condition on such information (in contrast to the Brown parser, which does).
neutral
train_97031
We can also compare these results to a prominent pedagogical category, such as scaffolding, that a current coding scheme particularly emphasizes, and examine the differences between the two.
off Topic conversation may seek to motivate the student in more subtle ways.
neutral
train_97032
LSA Average Similarity (lsaavg).
in this paper, we form the foundation for a broader study of this type of data by investigating the basic unit of interaction, referred to as an initiation-response pair (Schegloff, 2007).
neutral
train_97033
Thus, the final dataset consists of pairs of message pairs ((p_i, p_j), (p_i, p_k)), where both pairs share the same reply message p_i, and p_j is the correct quote message of p_i but p_k is not.
note that in this formulation, all words have an equal chance to affect the overall similarity between vectors, since it is the angle represented by each word in a pair that comes into play when cosine distance is applied to a word pair.
neutral
train_97034
In order to evaluate the quality of using the explicit reply structure as our gold standard for initiation-response links, we asked human judges to annotate the response structure of a randomly selected medium-length discussion (19 posts) where we had removed the meta-data that indicated the initiation-reply structure.
relatively little work has investigated the importance of specifically in-focus connections between initiation-response pairs and utilized them as clues for the task.
neutral
train_97035
Students were grouped so that no two members of the same team sat next to each other during the lab, to ensure all communication was recorded.
these agents are often ignored and abused in collaborative learning scenarios involving multiple students.
neutral
train_97036
For example, Table 1 below contains a sample of automatically produced summaries for some recently trending topics on Twitter.
while the majority of these tweets are pointless babble or conversational, approximately 3.6% of these posts are topics of mainstream news (Pear Analytics, 2009).
neutral
train_97037
For this new graph, the only input sentence that contains this root phrase would be sentence 1.
our method can automatically summarize a collection of microblogging posts that are all related to a topic into a short, one-line summary.
neutral
train_97038
The Phrase Reinforcement algorithm begins with a starting phrase.
stop words are given a weight of zero while remaining words are given weights that are both proportional to their count and penalized the farther they are from the root node: In the above equation, the RootDistance of a node is simply the number of hops to get from the node to the root node and the logarithm base, b, is a parameter to the algorithm.
neutral
train_97039
(2006) use logistic regression to extract keywords from web pages for content-targeted advertising, which has the most similar application to our work.
in this work, we use only isolated words as user tags, however, "google", "wave", and "palo", "alto" extracted in this example indicate that phrase level tagging can bring us more information about the user, which is typical of many users.
neutral
train_97040
One reason that they do better is because of the smaller number of classes.
the accuracy of the n-gram approach strongly depends on the length of the texts.
neutral
train_97041
As all of the other features in the DIRECTL framework are indicators, the training algorithm may have trouble scaling an informative real-valued feature.
the linear-chain features conjoin context and transition features.
neutral
train_97042
The FLM training data consists of 206 Million running full-words.
the main idea of the model is to backoff to other factors when some word n-gram is not observed in the training data, thus improving the probability estimates.
neutral
train_97043
Thus, the result for the whole word approach is very close to the result obtained by using gold standard segmentation (94.91%).
previous approaches (Diab et al., 2004; Habash and Rambow, 2005; van den Bosch et al., 2007; AlGahtani et al., 2009) chose the segmentation approach but concentrated on POS tagging by using the segmentation provided by the ATB.
neutral
train_97044
Mohammed, John), nominal (city, president) or pronominal (e.g.
for other languages, such as Chinese, character is considered as the adequate unit of analysis (Jing et al., 2003).
neutral
train_97045
Because the length of the context varies throughout the dictionary, fixed-length contexts may overfit some words, or inaccurately model others.
for each (g, p) ∈ A, estimate the change in description length of the lexicon if (g, p) is added.
neutral
train_97046
Ananthakrishnan and Narayanan (2008) used RFC (Taylor, 1994) and Tilt (Taylor, 2000) parameters along with word and part of speech language modeling to classify pitch accents as H*, !H*, L+H* or L*.
on all corpora, the classification accuracy is improved, with statistically insignificant (p > 0.05) reductions in CER.
neutral
train_97047
In both systems, the frame-based log posterior vector of P (phone|acoustics) over all phones is decorrelated using the Karhunen-Loeve (KL) transform; unlike MLPs, CRFs take into account the entire label sequence when computing local posteriors.
our second set of experiments are based on the following observation regarding the CRF posteriors.
neutral
train_97048
The prediction ŷ_U is used as the ground truth for the unlabeled data.
the problem of requiring a complete inference iteration before parameters are updated also exists in the semi-supervised learning scenario.
neutral
train_97049
Moreover, the attachment (jumped,with), while correct, receives a negative score for the bare preposition "with" (Fig.
we define structural, unigram, bigram, and PP-attachment features.
neutral
train_97050
However, these changes are limited to a fixed local context around the attachment point of the action.
for example, consider the attachments (brown,fox) and (joy,with) in figure (1.1).
neutral
train_97051
We must, therefore, learn how to order the decisions.
the model is not explicitly trained to optimize attachment ordering, has an O(n^2) runtime complexity, and produces results which are inferior to current single-pass shift-reduce parsers.
neutral
train_97052
At iteration i, there are n − i locations to choose from, and a naive computation of the argmax is O(n), resulting in an O(n^2) algorithm.
it is still a greedy, best-first algorithm leading to an efficient implementation.
neutral
train_97053
We jettison long, complex sentences and deploy Ad-Hoc * 's initializer and batch training at WSJk * -an estimate of the sweet spot data gradation.
figures 4 and 5 tell one part of our story.
neutral
train_97054
Our approach is inspired by earlier work on relaxation algorithms for performing MAP inference by incrementally tightening relaxations of a graphical model (Anguelov et al., 2004;Riedel, 2008), weighted Finite State Machine (Tromble and Eisner, 2006), Integer Linear Program (Riedel and Clarke, 2006) or Marginal Polytope (Sontag et al., 2008).
as a measure of accuracy for marginal probabilities we find the average error in marginal probability for the variables of a sentence.
neutral
train_97055
The two most prominent approaches to marginal inference in general graphical models are Markov Chain Monte Carlo (MCMC) and variational methods.
on close inspection often many of the additional factors we use to model some higher order interactions are somewhat unnecessary or redundant.
neutral
train_97056
We may apply the factorization operation repeatedly until all rules have rank 2; we refer to the resulting grammar as a binarization of the original LCFRS.
we simply use c(p) rather than ϕ(p) as the score for new productions, controlling both which binarizations we prefer and the order in which they are explored.
neutral
train_97057
It is not immediately apparent whether, in order to find a binarization of minimal parsing complexity, it is sufficient to consider only binarizations of minimal fan-out.
the fan-out of a production, ϕ(p), is the fan-out of its left-hand side, ϕ(A).
neutral
train_97058
We are grateful to Joakim Nivre for assistance with the Swedish treebank.
each distinct endpoint in the production is counted exactly once by eq.
neutral
train_97059
Our qualitative experiments indicate that the web derived lexicon can include a wide range of phrases that have not been available to previous systems, most notably spelling variations, slang, vulgarity, and multiword expressions.
if a node has multiple paths to a seed, it should be reflected in a higher score.
neutral
train_97060
The word sequence was converted to a phrase sequence by applying rules which combine two adjacent words, using POS sets including {,, --, ., :, POS, -RRB-, -RSB-, -RCB-}, P_P ≡ {IN, RP, TO, DT, PDT, PRP, WDT, WP, WP$, WRB}, and N_N ≡ {CD, FW, NN, NNP, NNPS, NNS, SYM, JJ}; adjacent words v_i and v_{i+1} are combined until no rules are applied. We constructed a Japanese polarity reversing word dictionary from the Automatically Constructed Polarity-tagged corpus (Kaji and Kitsuregawa, 2006).
in sentiment classification, a sentence which contains positive (or negative) polarity words does not necessarily have the same polarity as a whole, and we need to consider interactions between words instead of handling words independently.
neutral
train_97061
Conditional random fields with hidden variables have been studied so far for other tasks.
let us consider how to infer the sentiment polarity p ∈ {+1, −1}, given a subjective sentence w and its dependency tree h. The polarity of the root node (s_0) is regarded as the polarity of the whole sentence, and p can be calculated as follows: the polarity of the subjective sentence is obtained as the marginal probability of the root node polarity, by summing the probabilities for all the possible configurations of hidden variables.
neutral
train_97062
Kernels were combined using plain summation.
all experiments were done with the SVM-Light-TK toolkit.
neutral
train_97063
The overall performance of the tree kernels shows that they are much more expressive than sequence kernels.
if VK is added to the best TKs, the best SKs, or both, a slight increase in F-Score is achieved.
neutral
train_97064
they can be opinion words (see Sentences 1-3), communication words, such as maintained in Sentence 2, or other lexical cues, such as according in Sentence 3.
all sources used for this type of generalization are known to be predictive for opinion holder classification (Choi et al., 2005;Kim and Hovy, 2005;Choi et al., 2006;Kim and Hovy, 2006;Bloom et al., 2007).
neutral
train_97065
Titov and McDonald (2008b) underline the need for unsupervised methods for aspect detection.
finally, as online reviews belong to an informal genre, with inventive spelling and specialized jargon, it may be insufficient, for both aspect and sentiment, to rely only on lexicons.
neutral
train_97066
Both trigger and argument-edge detections leave much room for improvement.
this makes it very difficult to predict that "gp41" is the cause of "up-regulation", and that "up-regulation" is the theme of "involvement".
neutral
train_97067
Model Parameters. For each image we extracted 150 (on average) SIFT features.
since they do not analyze the actual content of the images, search engines cannot be used to retrieve pictures from unannotated collections.
neutral
train_97068
The wizard had four options: make a firm choice of a candidate, make a tentative choice, ask a question, or give up to end the title cycle.
although analysis of individual wizards has not been systematic in other work, we consider the variation in human performance significant.
neutral
train_97069
These are generated by taking a source language parse tree and 'expanding' each node so that it rewrites with different permutations of its children.
this reordering can happen in two ways, which we depict in Figure 1.
neutral
train_97070
Finally, although they use a lexicalized re-ordering model, no details are given about the baseline distortion cost model.
monotonic decoding no longer gives the least costly translation path, thus complicating future cost estimation.
neutral
train_97071
However, the binarization of Figure 4 allows the factorization into S(U , NP ) ↔ S(U , NP ) and U : @(NP , V ) ↔ @(V , NP ), which are fully binarized productions.
the construction is generally known as BAR-HILLEL construction [see Bar-Hillel et al.
neutral
train_97072
Four judges (graduate students) were used.
reasonable machine-generated sentence compressions can often be obtained by only removing words.
neutral
train_97073
We then use this pdf to calculate the percentile rank of extractive summarization systems.
we use 1,000 equally spaced bins between 0 and 1.
neutral
train_97074
By looking at the pdf plots and the minimum and maximum columns from Table 2, we notice that for all the domains, the pdfs are long-tailed distributions.
comparing Table 2 with the values in Table 1, we also notice that the compression ratio affects the performance differently for each domain.
neutral
train_97075
There are two reasons for this: First, as we mentioned earlier, most of the summary space consists of easy extracts, which make the distribution long-tailed.
for the legal and scientific domains, we use the given section boundaries (without considering the subsections for scientific documents).
neutral
train_97076
The budgeted constraint arises naturally since often the summary must be length limited as mentioned above.
also, the number of documents to be summarized can vary from one to many.
neutral
train_97077
For instance, the subjects of the English verb to shoot are generally people, while the direct objects can be people or animals.
it is risky to rely on the single nearest neighbor -it might simply be wrong.
neutral
train_97078
Syntactic functions occurring more than 1,000 times in the gold standard are shown in Table 1 (for more details we refer the interested reader to Surdeanu et al.
a mechanism for inducing the semantic roles observed in the data without additional manual effort would enhance the robustness of existing SRL systems and enable their portability to languages for which annotations are unavailable or sparse.
neutral
train_97079
Our model departs from the traditional SRL literature by modeling the argument identification problem in a single stage, rather than first classifying token spans as arguments and then labeling them.
furthermore, only temporal, locative, and directional senses of prepositions evoke frames.
neutral
train_97080
The empirical adequacy of 2-SCFG models would presumably be lower with automatically-aligned texts and if the study also included non-European languages.
on MT06, 53% of the translated sentences produced by our best system use at least one source-discontinuous phrase, and 9% of them exploit one or more target-discontinuous phrases.
neutral
train_97081
Additionally, the experiments were performed over a large corpus of messages that are not available for use by other researchers.
confounding the issue further is that users are able to configure their email client to suit their individual tastes, and can change both the syntax of quoting and their quoting style (top, bottom or inline replying) on a per message basis.
neutral
train_97082
4, the Choi data segment lengths are well-defined by their mean, because they were constructed with uniform distributions of segment length.
furthermore, in sampling hypothesized boundaries to match the number of reference boundaries, the hierarchical conception of the error metric smoothly adapts to segmentations that overestimate or underestimate the number of segments.
neutral
train_97083
A boundary that is j < k atoms from the beginning or end of the text has weight j/k relative to boundaries in the middle of the text.
they perform much worse than they did on the Choi data.
neutral
train_97084
The manual reformulation is formulaic, and it is part of our broader research effort to automate the process using transfer rules and a bi-directional grammar.
rather than giving participants a fixed scale (e.g.
neutral
train_97085
We now summarise our main research questions: 1.
it has not been investigated whether readers with different levels of domain expertise are facilitated by any specific lexico-syntactic formulation among the many possible explicit realisations of a relation.
neutral
train_97086
MacCartney and Manning (2008) used an inference procedure based on Natural Logic, leading to a relatively high-precision, low-recall system.
to account for the grandparent-child relationship in the hypothesis, TED would produce a fairly long sequence, relabeling nearby to be near, deleting the two nodes for Rossville Blvd, and then reinserting those nodes under near.
neutral
train_97087
The distributional semantics is captured by means of LSA: we used the java Latent Semantic Indexing (jLSI) tool (Giuliano, 2007).
in the case of WordNet, the validity of the kernel will depend on the kind of similarity used.
neutral
train_97088
There are several solutions for taking this information into account, e.g.
maxSSTK+WOK improves WOK on all datasets thanks to its generalization ability.
neutral
train_97089
According to Conceptual Metaphor Theory (Lakoff and Johnson, 1980) metaphor can be viewed as an analogy between two distinct domains -the target and the source.
we did not evaluate the WSD of the paraphrases at this stage.
neutral
train_97090
This error is likely to have been triggered by the feature similarity component, whereby one of the senses of lift would stem from the same node in WordNet as the metaphorical sense of mount.
termination is the shared element of the metaphorical verb and its literal interpretation.
neutral
train_97091
I then performed LDA via Mallet (McCallum, 2002) to estimate the narrative mixture components of each passage.
given the (estimated) narrative mixtures for each passage as an input, to which (if any) narrative threads ought this passage be assigned?
neutral
train_97092
A high number is negative in that it is the sign of an inefficient dialogue, one which takes many turn exchanges to accomplish the objective.
in this paper we describe a series of experiments investigating the relationship between objective acoustic/prosodic dimensions of entrainment and manually-annotated perception of a set of social variables designed to capture important aspects of conversational partners' social behaviors.
neutral
train_97093
To test whether CRF_{m+R}'s relatively high performance was due to chance, we computed 99% confidence intervals for the differences in F_1 score between CRF_{m+R} and each of the other methods.
we use "Rules" to refer to this method.
neutral
train_97094
From this sample, we computed lists of n-grams (n = 1, 2, .
our approach distinguishes the sequence "The argument states that .
neutral
train_97095
BP is an inference technique that can be used to efficiently approximate posterior marginals on variables in a graphical model; here the marginals of interest are the phrase pair posteriors.
the extraction heuristic identifies many overlapping options, and achieves high coverage.
neutral
train_97096
The lower part of the table provides the same comparison, but for the variation of the word factored TM.
future work will thus aim at introducing them into conventional phrase-based systems, such as Moses .
neutral
train_97097
One practical solution is to restrict the output vocabulary to a short-list composed of the most frequent words (Schwenk, 2007).
this work was partially funded by the French State agency for innovation (OSEO), in the Quaero Programme.
neutral
train_97098
Then its distortion counterpart P(s_i^k | h_{n−1}(t_i^1), h_{n−1}(s_i^k)), and finally their combination, which yields the joint probability of the sentence pair.
the usual size of the short-list is under 20k, which does not seem sufficient to faithfully represent the translation models of section 2.
neutral
train_97099
When co-referent text mentions appear in different languages, these techniques cannot be easily applied.
we learn the parameters λ using a quasi-Newton procedure with L_1 (lasso) regularization (Andrew and Gao, 2007).
neutral