Columns: id (string, lengths 7-12) · sentence1 (string, lengths 6-1.27k) · sentence2 (string, lengths 6-926) · label (string, 4 classes)
train_101000
Similarly, in the Science domain, the French word 'finis,' when it should have been translated as 'finite,' was translated incorrectly due to a sense error 27 times.
in contrast to that work, we do not directly align the hypothesis and reference translations but, rather, pivot through the source text.
neutral
train_101001
For an evaluation of these topics see Section 4.3.1.
given the top M words (v_1, ..., v_M) for a topic t, the coherence of that topic can be calculated with the following formula: C(t) = Σ_{m=2}^{M} Σ_{l=1}^{m-1} log((D(v_m, v_l) + 1) / D(v_l)), where D(v) is the number of documents containing v and D(v, v′) is the number of documents containing both v and v′.
neutral
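The coherence formula in this row can be sketched directly from the definitions of D(v) and D(v, v′); the function name and the document-set encoding below are illustrative assumptions, not from the dataset.

```python
from math import log

def umass_coherence(top_words, doc_sets):
    """Coherence of one topic: sum over ordered pairs (v_m, v_l), m > l, of
    log((D(v_m, v_l) + 1) / D(v_l)), where D counts containing documents.
    doc_sets maps each word to the set of ids of documents containing it;
    D(v_l) is assumed nonzero for every top word."""
    score = 0.0
    for m in range(1, len(top_words)):
        for l in range(m):
            v_m, v_l = top_words[m], top_words[l]
            co = len(doc_sets[v_m] & doc_sets[v_l])      # D(v_m, v_l)
            score += log((co + 1) / len(doc_sets[v_l]))  # D(v_l)
    return score
```

Higher (less negative) scores indicate topics whose top words co-occur often.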
train_101002
Although labels have a one-to-one correspondence with languages, the label distribution does not actually correspond directly to the language proportion, because the distribution estimates the proportion of byte n-gram sequences associated with a label and not the proportion of bytes directly.
we propose a method that concurrently detects that a document is multilingual, and estimates the proportion of the document that is written in each language.
neutral
train_101003
Code from Burkett et al. (2010) was obtained through personal communication.
in a multilingual setting, coming up with effective constraints requires extensive knowledge of the foreign language.
neutral
train_101004
We distinguish the two senses in a preprocessing step: if the other captions of the same image do not mention children, but refer to teenaged or adult women, we assign girl the woman-sense.
this variety of descriptions associated with the same image is what allows us to induce denotational similari-ties between expressions that are not trivially related by syntactic rewrite rules.
neutral
train_101005
This suggests that learning this pattern correctly requires considerably more input than for the other patterns.
by combining different kinds of cues, e.g.
neutral
train_101006
The sole exception in this respect is Doyle and Levy (2013) who added stress cues to the Bigram model , demonstrating that this leads to an improvement in segmentation performance.
we find that phonotactic cues to word-boundaries interact with stress cues, indicating synergistic effects for small inputs and partial redundancy for larger inputs.
neutral
train_101007
If together with its neighbors it can match the selected label, the assignment is finalized.
results: Table 4 compares the results for two baseline systems-standard EM (method 1), and a previously reported system using model minimization (method 2) for the same task.
neutral
train_101008
We propose a new method for unsupervised tagging that finds minimal models which are then further improved by Expectation Maximization training.
this data was created by semi-automatically converting the Penn treebank to CCG derivations (Hockenmaier and Steedman, 2007).
neutral
train_101009
• We show how to efficiently parallelize the algorithm while preserving approximation guarantees.
previous work (Ravi et al., 2010a;Ravi et al., 2010b) recognized this challenge and employed two phase heuristic approaches.
neutral
train_101010
We also write r(σ[i]) to denote the root of t. Informally, the LR-spine parser uses the same transition typologies as the arc-standard parser.
for the arc-standard parser, the computation takes cubic time in the size of the largest of the left and right input stacks.
neutral
train_101011
The path from t's root to node σ L [1] is called the spine of t. • Every node of t not in the spine is a dependent of some node in the spine.
our novel dynamic oracle algorithms rely on arguably more complex structural properties of computations, which are computed through dynamic programming.
neutral
train_101012
This could not be applied to the other two disfluency detectors, so we cannot test those differences for significance.
speakers tend to be disfluent in bursts: if the previous word is disfluent, the next word is more likely to be disfluent.
neutral
train_101013
Deriving patient history timelines from clinical notes also involves these types of assumptions, but there are special demands imposed by the characteristics of the clinical narrative.
similarly, mentions of symptoms or disorders reflect occurrences in a patient's life, rather than abstract entities.
neutral
train_101014
For unguided, general-purpose annotation, the number of relations that could be annotated grows quadratically with the number of events and times, and the task quickly becomes unmanageable.
based on the known progression-of-care for colon cancer, we can infer that the colonoscopy occurs first, biopsies occur during the colonoscopy, pathology happens afterwards, a diagnosis (here, adenocarcinoma) is returned after pathology, and resection of the tumor occurs after diagnosis.
neutral
train_101015
eMRG with both Binary and Ternary Edges: If there is more than one entity mention and at least one attribute mention in a sentence, an eMRG can potentially have both binary and ternary edges.
if not, the sentence is marked as off-topic.
neutral
train_101016
In practice, however, differ-ent types of sentiment-oriented relations frequently coexist in documents.
a mention-based relation graph (or MRG ) represents a collection of mention-based relation instances expressed in a sentence.
neutral
train_101017
An evidentiary mention-based relation graph, coined eMRG , extends an MRG by associating each edge with a textual evidence to support the corresponding relation assertions (see Figure 2).
since instance sets of sentiment-oriented relations (ssoRs) are the expected outputs, we can obtain ssoRs from MRGs by using a simple rule-based algorithm.
neutral
train_101018
Then (y, h) corresponds to an eMRG, and (a, c) ∈ (y, h) is a labeled edge a attached with a textual evidence c. Given a labeled dataset D = {(x 1 , y 1 ), ..., (x n , y n )} ∈ (X × Y) n , we aim to learn a discriminant function f : X → Y × H that outputs the optimal eMRG (y, h) ∈ Y(x) × H(x) for a given sentence x.
these methods fall short of extracting comparative relations based on domain dependent information.
neutral
train_101019
week") and a linear interpolation of "base all" and "base one" ("int one all").
algorithmically, our work comes closest to the online dynamic topic model of Iwata et al.
neutral
train_101020
Additionally, we also compare with a base model whose counts decay exponentially ("base exp").
our experiments show that we are still able to learn reasonable parameter estimates by optimizingB.
neutral
train_101021
1 We point out that our definition is unconstrained in terms of what to link, i.e., unlike Wikification and WSD, we can link overlapping fragments of text.
• The Senseval-3 dataset for English all-words WSD (Snyder and Palmer, 2004), which contains 899 nouns to be disambiguated using WordNet.
neutral
train_101022
The results obtained by UKB show that the high performance of our unified approach to EL and WSD is not just a mere artifact of the use of a rich multilingual semantic network, that is, Ba-belNet.
the aim of EL is to discover mentions of entities within a text and to link them to the most suitable entry in a reference knowledge base.
neutral
train_101023
Notwithstanding these differences, the tasks are similar in nature, in that they both involve the disambiguation of textual fragments according to a reference inventory.
but while the two tasks are pretty similar, they differ in a fundamental respect: in EL the textual mention can be linked to a named entity which may or may not contain the exact mention, while in WSD there is a perfect match between the word form (better, its lemma) and a suitable word sense.
neutral
train_101024
On Wikipedia our results range between 71.6% (French) and 87.4% F1 (English), i.e., more than 10 points higher than the current state of the art (UMCC-DLSI) in all 5 languages.
we show that the semantic network structure can be leveraged to obtain state-of-the-art performance by synergistically disambiguating both word senses and named entities at the same time.
neutral
train_101025
The only knowledge we use is a simple translation lexicon, that is, a list of translation pairs without translation probabilities, as shown in Figure 2.
our Croatian evaluation is a synonym choice task parallel to Exp.
neutral
train_101026
It is proposed that the temporal model on social media needs to account for two factors: imitation and recency (Leskovec et al., 2009).
for evaluation, we compute the f1 score over the test set, as defined earlier (Guo et al., 2013).
neutral
train_101027
Suppose that a marketing firm is interested in the sentiment about some product on Twitter.
next, we examine how these errors can be recovered in +T+L.
neutral
train_101028
For the binning approach, we tune the bin size on the development set, ranging from 10 minutes to 1 day for time, and from 10 × 10 sqkm to 1500 × 1500 sqkm for location.
retrieving tweets via keyword queries inevitably mixes different entities.
neutral
train_101029
The TempEval competitions (Verhagen et al., 2007;Verhagen et al., 2010;UzZaman et al., 2013b) aimed to improve coverage by annotating relations between all events and times in the same sentence.
the set of temporal relations are a subset of the 13 original Allen relations (Allen, 1983).
neutral
train_101030
This can be seen in the performance of the ClearTK-TimeML system, which achieved the top performance (36.26) on the sparse relation task of TempEval 2013, but performed dramatically worse (15.8) on our dense relation task.
only a small subset of possible pairs are labeled.
neutral
train_101031
While linguistic information overall effectively reflects the meaning of all concept types, we show that features encoding syntactic patterns are only valuable for the acquisition of abstract concepts.
in future work we aim to extend our experiments to concept types such as adjectives and adverbs, and to develop models that further improve the propagation and combination of extra-linguistic input.
neutral
train_101032
8, which shows all the values at 15% evidence ratio.
for a Wikipedia editor to become an administrator, a request for adminship (RfA) must be submitted, either by the candidate or by another community member.
neutral
train_101033
(Note that at most one of the two sum factors of Eq.
assume we want to predict the sign of the edge (u, v) in the LOO setting, and that u, v, and w form a triangle.
neutral
train_101034
It seems hard to improve by much on a sentiment model that achieves an AUC/ROC of 0.88 on its own; the Wikipedia corpus offers an exceptionally explicit linguistic signal.
we sketch a proof of this theorem in Appendix A: The objective function of Eq.
neutral
train_101035
HITs were automatically approved after fifteen minutes.
then, by normalizing (and rounding), we obtain the adjusted label probabilities. Although the majority vote on i is for category 1, the estimated probability that the category is 1 is only 0.11, given the adjustments for annotators' accuracies and biases.
neutral
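The adjustment for annotators' accuracies and biases mentioned in this row can be illustrated with a Dawid-Skene-style posterior over the true category; this is a hypothetical sketch with per-annotator confusion matrices, not the paper's exact model.

```python
import numpy as np

def category_posterior(labels, confusions, prior):
    """Posterior over the true category of one item given independent
    annotator labels. confusions[a][true][observed] is annotator a's
    confusion matrix (hypothetical encoding for illustration)."""
    post = np.array(prior, dtype=float)
    for a, obs in enumerate(labels):
        post *= np.array([confusions[a][t][obs] for t in range(len(prior))])
    return post / post.sum()
```

An accurate annotator's vote shifts the posterior more than an inaccurate one's, which is how a majority vote can be overruled.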
train_101036
Each item has sense labels from up to twenty-five different annotators, collected through crowdsourcing.
we conclude that a model-based label derived from many Turkers is preferable to a label from a single trained annotator.
neutral
train_101037
The MSM2013 corpus has 4 types of named entities, person (PER), location (LOC), organization (ORG), and miscellaneous (MISC).
qu and Liu (2012) used the pairwise CRF with a 4-connected neighborhood system (2D CRF) as their graphical model, where each vertex in the graph represents a sentence pair, and each edge connects adjacent source sentences or target sentences.
neutral
train_101038
All the 4 taggers are trained using linear chain CRFs with perceptron training.
they could not be handled efficiently by belief propagation for spanning trees.
neutral
train_101039
Congruence Constraints: To ensure that each CKY cell has at most one symbol we require ...; we also require that ..., where R_h = {r ∈ R : r = h → ...}
The number of positions is equal to the number of p types; the two variables have been discussed by Clarke and Lapata (2008).
neutral
train_101040
Dissonance due to generalization error; Completely wrong; Extraneous information; Vision detection error; Human: "A delightful clock in the town centre of St Helier with the iconic Jersey cow at the base."
pruning Case (2)/(3): Deletion of the left/right child respectively.
neutral
train_101041
For example, S i = {NN,NP, ...} if p i corresponds to an "object" (noun-phrase).
note that some test instances include rules that we have not observed during training.
neutral
train_101042
To give a specific example, their operator literal to constant is equivalent to having named entities for larger text chunks in our case.
the denotation 1 of the NL query graph is {AUSTIN}.
neutral
train_101043
As seen in these examples, locally non-linear models (Eq.
this technique is sometimes referred to as binning and is closely related to quantization.
neutral
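The binning/quantization mentioned in this row can be sketched with a simple equal-width bucketing helper; `bin_value` and its clamping behavior are illustrative assumptions.

```python
def bin_value(x, low, high, n_bins):
    """Map a continuous value to one of n_bins equal-width bins over
    [low, high); values outside the range are clamped to the end bins."""
    if x <= low:
        return 0
    if x >= high:
        return n_bins - 1
    width = (high - low) / n_bins
    return int((x - low) // width)
```

Binning turns a continuous feature into a discrete one, letting locally non-linear effects be captured by per-bin weights.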
train_101044
We introduce some additional terminology and notation that we will use in the proofs.
the set of derivation trees of a CCG can be formally defined as in Figure 2.
neutral
train_101045
This sharing is the key to the polynomial runtime.
this idea actually underlies several parsing algorithms for equivalent mildly context-sensitive formalisms, such as tree-Adjoining Grammar (Joshi and Schabes, 1997).
neutral
train_101046
The relative proliferation of inference rules, combined with the increase in their complexity, makes, in our own opinion, the specification of the V&W parser more difficult to understand and implement, and calls for a more articulated correctness proof.
for every degree d ≥ 0 there are 2^d forward rules and 2^d backward rules.
neutral
train_101047
Preposition features are active if the NP is immediately preceded by a preposition.
table 14 compares models trained on native and learner data in their best configurations based on the training data.
neutral
train_101048
The Illinois system improves its F1 from 31.20 to 42.14 on revised annotations.
it does this by generating additional artificial errors using the error distribution from the training set.
neutral
train_101049
Practical systems, however, should be tuned for good precision to guarantee that the overall quality of the text does not go down.
when a very large native training set such as the Web1T corpus is available, it is often advantageous to use it.
neutral
train_101050
We present MULTIP (Multi-instance Learning Paraphrase Model), a new model suited to identify paraphrases within the short messages on Twitter.
for example, the word pair "next new" in two tweets comparing a new player, Manti Te'o, to a famous former American football player, Junior Seau: • Manti bout to be the next Junior Seau • Teo is the little new Junior Seau Further note that not every word pair of similar meaning indicates sentence-level paraphrase.
neutral
train_101051
For example, the word pair "next new" in two tweets comparing a new player, Manti Te'o, to a famous former American football player, Junior Seau: • Manti bout to be the next Junior Seau • Teo is the little new Junior Seau Further note that not every word pair of similar meaning indicates sentence-level paraphrase.
y i is then set by aggregating all z i through the deterministic-or operation.
neutral
train_101052
The additional images provide two important benefits: (1) an increasing degree of challenge to keep the player's interest, (2) more image interactions to use in producing the annotation.
in essence, accurate players with high α are more likely to be shown mystery gates that annotate pictures from U , whereas completely inaccurate players are prevented from adding new annotations.
neutral
train_101053
The second type of gate, referred to as a mystery gate, shows three images that are potentially related to the clue.
as with Puzzle Racer, the Ka-boom!
neutral
train_101054
continuously revises the annotation during gameplay based on which pictures players spare, the first analysis assesses how the accuracy changes with respect to the length of one Ka-boom!
puzzle Racer reduces the annotation cost to ≤27% of that required by crowdsourcing.
neutral
train_101055
Each flight contains one randomly-selected picture for each of a word's n senses and n distractor images.
in our first future work, we plan to develop new types of video games for textual items as well as extend the current games for new semantic tasks such as selectional preferences and frame annotation.
neutral
train_101056
Probabilistic modeling is a useful tool in understanding unstructured data or data where the structure is latent, like language.
vARIATIONAL runs fifty iterations, with the same truncation level as in ONLINE.
neutral
train_101057
In this case, an NER system based on surface statistics alone would likely predict that Freddie Mac is a PERSON.
table 3 shows the results of adding each interaction factor in turn to the baseline and removing each of the three interaction factors from the full joint model (see Figure 4).
neutral
train_101058
By Theorem 3, it is ISL.
5 Its state merging criterion is based on the defining property of ISL functions: two input strings with the same suffix of length (k − 1) have the same tails.
neutral
train_101059
A function f is ISL iff there is some k such that f can be described with an SFST for which the defining property holds. This theorem helps make clear how ISL functions are Markovian: the output for input symbol a depends on the last (k − 1) input symbols.
since each element of the left projection of S is of length (k + 1) and each element of its right projection is at most of length (k + 1), the size of the characteristic sample is clearly polynomial in the size of the target transducer.
neutral
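The Markovian property described here, where the output for input symbol a depends only on the last (k − 1) input symbols, can be illustrated with a toy k-ISL application; the rule-table encoding and the fall-back of copying the symbol are assumptions for illustration.

```python
def apply_isl(rules, k, s):
    """Apply a toy k-ISL (input strictly local) function: the output for
    each input symbol is determined by a length-k window ending at that
    symbol. rules maps a window to an output string; unmatched windows
    copy the input symbol. '#' pads the left edge to mark the word start."""
    out = []
    padded = "#" * (k - 1) + s
    for i in range(len(s)):
        window = padded[i:i + k]
        out.append(rules.get(window, s[i]))
    return "".join(out)
```

For example, a 2-ISL rule table can rewrite a word-initial symbol while leaving later occurrences untouched.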
train_101060
Using a semi-Markov CRF, we model the conditional distribution over all possible opinion segmentations y given the input x: p(y | x; θ) = exp(θ · f(x, y)) / Σ_{y′} exp(θ · f(x, y′)), where θ denotes the model parameters and f denotes a feature function that encodes the potentials of the boundaries for opinion segments and the potentials of transitions between two consecutive labeled segments.
semi-Markov CRFs (Sarawagi and Cohen, 2004) (henceforth semi-CRF) have been shown more appropriate for the task than CRFs since they allow contiguous spans in the input sequence (e.g.
neutral
train_101061
Another source of phonetic problems occurs if the prefix boundary splits the word in a way that leads to a different pronunciation pattern compared to the solution word, as in this example.
note that the automatic prediction is compared with the actual error rates, not the human predicated ones.
neutral
train_101062
We check if the same solution also occurs in another gap to account for repetition.
our model has been developed for the difficulty prediction of English C-tests.
neutral
train_101063
In addition, an expert policy π must be specified which is an oracle that returns the optimal action for the instances in S, akin to an expert demonstrating the task.
• Utterances that are within the scope of the system but too complex to be represented by the proposed MRL, e.g.
neutral
train_101064
For these reasons, we converted the MR expressions into a node-argument form.
3 gives results on the test set.
neutral
train_101065
The datasets developed in the recent dialog state tracking challenge (Henderson et al., 2014) also consist of dialogs between a user and a tourism information system.
3 are not meaningful since fewer dialogs are being used for training and testing in the cross-scenario setup.
neutral
train_101066
The loss function in DAGGER is only used to compare complete outputs against the gold standard.
since π rand is progressively ignored, the effect of such actions is reduced.
neutral
train_101067
spaghettibutter) from the Wall Street Journal test set are unseen in training.
the best variants use different composition matrices based on the distance of the candidate head from the PP (HPCD, HPCDN).
neutral
train_101068
(2008) demonstrate that using WordNet semantic classes benefits PP attachment performance.
this can be integrated in the composition architectures in the following way: for each candidate head, represented by a vector h ∈ R n , concatenate a vector representing the word following the candidate.
neutral
train_101069
This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language.
for our classifiers, we use a sparse averaged perceptron implemented with the SNOW framework (Carlson et al., 1999).
neutral
train_101070
The importance of reasoning about quantities has been recognized and studied from multiple perspectives.
the rules were specific to the queries used, and do not extend well to unrestricted English.
neutral
train_101071
An important problem in this context is cross-document coreference resolution (CCR): computing equivalence classes of textual mentions denoting the same entity, within and across documents.
all mentions pertaining to John Smith within a document refer to the same person.
neutral
train_101072
Probabilistic graphical models like Markov Logic networks (Richardson & Domingos, 2006;Domingos et al., 2007;Domingos & Lowd, 2009) or factor graphs (Loeliger, 2008;Koller & Friedman, 2009) take into consideration constraints such as transitivity, while spectral clustering methods (Luxburg, 2007) implicitly consider transitivity in the underlying eigenspace decomposition, but suffer from high computational complexity.
this is also done for the cooccurring mention groups, leading to the extended scope of the original mention group considered.
neutral
train_101073
BIC is a Bayesian variant of the Minimum Description Length (MDL) principle (Grunwald, 2007), assuming the points in a cluster to be Gaussian distributed.
simpler methods also perform fairly well.
neutral
train_101074
Finally, our third single system baseline is the model of Toutanova et al.
similar to Tromble and Eisner (2006), for all algorithms, we first use the local solution without constraints and only apply the constraints in the case of a violation.
neutral
train_101075
Another popular approach has been to apply a reranking model, which can incorporate soft structural constraints in the form of features, on top of the k-best output of local classifiers (Toutanova et al., 2008;Johansson and Nugues, 2008).
since there are no overlaps, we will not need to include incompatible edges.
neutral
train_101076
(2013) report an accuracy of 92.8% on the test set; this represents almost a 30% relative error reduction.
second, even if we restrict the constraints to core reference roles, the lack of ordering between the spans in the constraint means that we would have to represent all subsets; moreover, these constraints are rarely violated in practice.
neutral
train_101077
This allows word distributions to depend on both factors.
our experiments focused on understanding how SPRITE compares to commonly used models with similar structures, and how the different variants compare under different metrics.
neutral
train_101078
We trained 100-dimensional skip-gram vectors (Mikolov et al., 2013) on English Wikipedia (tokenized/lowercased, resulting in 1.8B tokens of text) using window size 10, hierarchical softmax, and no downsampling.
if the current word is a local context word, we sample a new topic/sense pair for it again conditioned on all other latent variable values.
neutral
train_101079
We perform inference using collapsed Gibbs sampling ( §4.2), then estimate the sense distribution for each instance as the solution to the WSI task.
in Table 2, we present results for these systems and compare them to our basic (i.e., without any data enrichment) sense-topic model with S = 3 (row 9).
neutral
train_101080
Compared with LDA with full context (FULL) in row 6, performance is slightly improved, perhaps due to the fact that longer contexts induce more accurate topics.
their assumption is that words with similar distributions have similar meanings.
neutral
train_101081
Length We also investigated whether solution length is a predictor of complexity (e.g., simple solutions may vary in length and amount of detail from complex ones).
τ ranges between −1 and +1, where +1 indicates equivalent rankings, −1 completely reverse rankings, and 0 independent rankings.
neutral
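The rank correlation τ defined in this row is Kendall's tau; a minimal sketch over two rankings of the same items (function name assumed):

```python
def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings of the same items:
    (concordant - discordant) / total pairs. +1 means identical order,
    -1 means completely reversed order, and values near 0 mean the
    rankings are unrelated. Tied pairs count as neither."""
    n = len(rank_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

This O(n²) form is fine for the short rankings typical of evaluation; libraries use an O(n log n) variant.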
train_101082
The Position and Permutation performed significantly better (p < 0.05) compared to Random, Length and SynSem baselines.
we perform Laplace smoothing to avoid zero probability transitions between states: P(s_j | s_i) = (C(s_i, s_j) + 1) / (C(s_i) + N), where N is the number of states. This HMM formulation allows us to use efficient dynamic programming to compute the likelihood of a sequence of solutions.
neutral
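The Laplace-smoothed transition estimates and the sequence likelihood mentioned in this row can be sketched as follows; the function names and the fully-observed-sequence simplification are assumptions for illustration.

```python
import numpy as np

def smoothed_transitions(sequences, n_states, alpha=1.0):
    """Laplace-smoothed transition matrix: add alpha to every count so no
    transition has zero probability, then row-normalize."""
    counts = np.full((n_states, n_states), alpha)
    for seq in sequences:
        for s, t in zip(seq, seq[1:]):
            counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sequence_log_likelihood(seq, trans, init):
    """Log-likelihood of a fully observed state sequence under the chain."""
    ll = np.log(init[seq[0]])
    for s, t in zip(seq, seq[1:]):
        ll += np.log(trans[s, t])
    return ll
```

With hidden states, the same smoothed matrix would feed the forward algorithm instead of this direct chain product.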
train_101083
The sampling sequence starts with a random initialization to the hidden variables.
we can compute the expected complexity of the solution set for problem i, using the inferred distribution over levels. Table 5 shows the complexity of different problems as predicted by the position model (with 10 levels).
neutral
train_101084
In our current design, the CPU portions are those that cannot be easily parallelized on the GPU or those that require too much memory to fit on the GPU.
gPUs are poor at "irregular" computations that involve conditionals, pointer manipulation, and complex execution sequences.
neutral
train_101085
Munroe summarizes his results with 954 idealized colors: RGB values that best exemplify high frequency color labels.
the graphical model in Figure 4 formalizes our approach.
neutral
train_101086
The most likely label of this posterior is the maximum likelihood estimate (MLE).
similar histogram models have been developed by Chuang et al.
neutral
train_101087
In any utterance, yellowish green fits only those Hue values that are above a minimum threshold τ_Lower and below a maximum threshold τ_Upper.
the model describes this variability with probability density functions.
neutral
train_101088
Similar to the diverse condition in Table 1, it is important that the extractor can correctly predict on diverse sentences that are dissimilar to each other.
it allows them to scale but makes them unable to output canonicalized relations.
neutral
train_101089
We demonstrate the potential for web links to both complement and completely replace Wikipedia derived data in entity linking.
for example, we observe links to Apple the fruit where the surrounding context indicates an intention to link Apple Inc instead.
neutral
train_101090
We define name probability as the conditional probability of a name referring to an entity: P(e | n) = |M_{n,e}| / |M_{n,*}|, where M_{n,e} is the set of mentions with name n that refer to entity e and M_{n,*} is all mentions with name n. We use existing conditional probability estimates from the DBpedia Lexicalizations data set (Mendes et al., 2012).
we also present detailed experiments comparing popularity, context, and coherence components across settings.
neutral
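The name probability defined in this row, P(e | n) = |M_{n,e}| / |M_{n,*}|, can be estimated from raw mention pairs; `name_probabilities` is an illustrative helper, not the DBpedia Lexicalizations interface.

```python
from collections import Counter

def name_probabilities(mentions):
    """Estimate P(entity | name) = |M_{n,e}| / |M_{n,*}| from a list of
    (name, entity) mention pairs: count each pair, then divide by the
    total number of mentions carrying that name."""
    by_name = Counter(n for n, _ in mentions)
    by_pair = Counter(mentions)
    return {(n, e): c / by_name[n] for (n, e), c in by_pair.items()}
```

This is the "popularity" prior that context and coherence components then refine.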
train_101091
Wikipedia models generally perform better.
references to the inventor appear with words like 'engineer', 'ac', 'electrical'.
neutral
train_101092
But it is not well-equipped to capture semantic relatedness at the word level.
as is common in unsupervised segmentation (Poon et al., 2009;Sirts and Goldwater, 2013), we included the test words (without their segmentations) with the training words during parameter learning.
neutral
train_101093
The prediction of the dependency topology and labels through P l means that the full RDLM has the highest perplexity of all models.
for unknown words, we back-off to a special unk token for the sequence models and P l , and to the pre-terminal symbol for the other dependency models.
neutral
train_101094
Linguistic Space We construct distributional vectors from text through the method recently proposed by , to which we feed a corpus of 2.8 billion words obtained by concatenating English Wikipedia, ukWaC and BNC.
hit@k measures the percentage of images for which at least one gold attribute exists among the top k retrieved attributes.
neutral
train_101095
For example, by chaining even small numbers of direct associations, such as breakfast -pancakes, pancakes -hashbrowns, hashbrowns -potato, and potato -field, a model that does not control for semantic drift may be tempted to answer questions about breakfast venues with answers that discuss wheat fields or soccer fields.
the translation table for t (q|a) is computed using GIZA++ (Och and Ney, 2003).
neutral
train_101096
Other scenarios showed similar trends in our preliminary experiments.
embedding algorithms suggest some natural hyperparameters that can be tuned; many of which were already tuned to some extent by the algorithms' designers.
neutral
train_101097
For all models, we normalize the embeddings so that the L-2 norm equals 1, which is important in measuring semantic similarity via inner product.
for unsupervised training on large scale raw texts (language modeling) we train fCT so that phrase embeddings -as composed in Section 2 -predict contextual words, an extension of the skip-gram objective (Mikolov et al., 2013b) to phrases.
neutral
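The L2 normalization described in this row, which makes the inner product of two embeddings equal their cosine similarity, can be sketched as:

```python
import numpy as np

def l2_normalize(emb):
    """Scale each row of an embedding matrix to unit L2 norm so that the
    inner product of any two rows equals their cosine similarity."""
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    return emb / norms
```

After normalization, nearest-neighbor search by dot product and by cosine give identical rankings.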
train_101098
The performance of the RNN with 25-dimensional embeddings is too low, so it is omitted.
pre-trained Word Embeddings For methods that require pre-trained lexical embeddings (FCT with pre-training, SUM (Section 5), and the FCT and RNN models in Section 6) we always use embeddings 2 trained with the skip-gram model of word2vec.
neutral
train_101099
We evaluate the perplexity of language models that include lexical embeddings and our composed phrase embeddings from FCT using the LM objective.
table 5 summarizes these datasets and shows examples of inputs and outputs for each task.
neutral