id: string (lengths 7 to 12)
sentence1: string (lengths 6 to 1.27k)
sentence2: string (lengths 6 to 926)
label: string (4 classes)
train_100900
One possible explanation for the comparably low performance of most similar assertion is that the past assertions are not sufficient to make a meaningful prediction.
we attribute this to the systematic error made by the SVM when trying to predict the similarity of assertions that have a negative agreement score (https://github.com/muchafel/judgmentPrediction).
neutral
train_100901
We will refer to such utterances as assertions.
with an accuracy of about 85%, the most similar user strategy performs best.
neutral
train_100902
In Figure 1, we show this case for binary (yes/no) predictions on individuals and argue that this can be also generalized to probabilistic predictions on groups of people.
we capture the wording of assertions by different ngram features.
neutral
train_100903
up- or downvote social media posts.
for the SNN, the predictions follow a similar pattern: resembling a learning curve, the performance increases rapidly with increasing n, but then plateaus from a certain number of assertions.
neutral
train_100904
Furthermore, the entity embeddings tend to be very similar to the embeddings of relations in which they frequently participate.
we applied L2-normalization on the generated embeddings.
neutral
train_100905
Please note that a direct comparison with IKRL (Paper) is not possible since we do not have access to the same set of negative samples.
such a representation suffers from limited discriminativeness and can be considered a main source of error for different KG inference tasks.
neutral
train_100906
We have illustrated the techniques by looking at the roles in the TRIPS framework.
semantic roles have a long history, originating in linguistics as thematic roles (e.g., Fillmore, 1968; Dowty, 1991) and widely adopted in computational linguistics for semantic representations because of their compatibility with frame-based and graph-based (i.e., semantic networks) representations of meaning.
neutral
train_100907
Note that based on the definitions of the roles, it is common that the same role appears more than once in a sentence.
for example, VerbNet would treat "the table" as the DESTINATION without explicit representation of the spatial relation (under).
neutral
train_100908
There is an argument to this verb that is the judgement, but this is not signaled by an elided argument or an indefinite pronoun.
loosely speaking, when +causal obtains (for an AGENT o_ag), there exists some scale sc* such that a change of o_ag on the sc* scale would entail a change of the AFFECTED o_aff on the sc_E scale.
neutral
train_100909
For short-text classification we use the Stanford sentiment treebank (TR), customer reviews dataset (CR) (Hu and Liu, 2004), subjective dataset (SUBJ) (Pang and Lee, 2004) and movie reviews (MR) (Pang and Lee, 2005).
to address the feature sparseness problem encountered in short-text and cross-domain classification tasks, we propose a novel method that computes related features that can be appended to the feature vectors to reduce the sparsity.
neutral
train_100910
To overcome feature-sparseness in training and test instances, we expand features that are cores by their corresponding peripheral sets.
can be used to compute the weights assigned to the edges.
neutral
train_100911
For short-text classification (Figure 2a), the accuracy increases for k ≥ 100 (TR and CR obtain their best results at k = 1000).
we decompose a feature-relatedness graph into core-periphery (CP) structures, where a core feature (a vertex) is linked to a set of peripheries (also represented by vertices), indicating the connectivity of the graph.
neutral
train_100912
Then, the detected peripheral vertices may be densely interconnected because they belong to the same community.
we use the Kernighan-Lin algorithm (Kernighan and Lin, 1970) to find a good (but generally suboptimal) solution.
neutral
train_100913
The contexts we care about are those that are shared.
depending on whether we are talking about bed sizes, these two items are either closely related or completely unrelated, and thus context dependent.
neutral
train_100914
Experiments on the development set showed that target word BERT representations and USE sentence embeddings are the best-suited for WiC.
following this procedure, context2vec produces a ranking of candidate substitutes for each target word instance in the Usim, CoInCo and WiC datasets, according to their fit in context.
neutral
train_100915
Apart from featurization, loss functions and normalization of probability are also available design choices.
we can featurize both inputs and outputs -in our case, contexts and words.
neutral
train_100916
However, we hypothesize that each relation may contain useful information about the others, and training on only one relation inevitably neglects some relevant information.
in the multitask setting, relations are presented to the model in the order they are listed in the result tables within each batch.
neutral
train_100917
Our representation transfer framework is very similar to their approach, although we use a simpler loss function.
mono-lingual evaluation of sentence representation models can be found in Hill et al.
neutral
train_100918
We surmise that only a subset of semantic features were learned by the InferSent objective given the specific characteristics of the SNLI training sets.
people are gathered by the water.
neutral
train_100919
(2018a) decomposes the Parsing Time Normalizations task into two subtasks: a) time entity identification using a character-level sequence tagger which detects the spans of characters that belong to each time expression and labels them with their corresponding time entity; and b) time entity composition using a simple set of rules that links relevant entities together while respecting the entity type constraints imposed by the SCATE schema.
pre-trained language models (LMs) such as ELMo (Peters et al., 2018), ULMFiT (Howard and Ruder, 2018), OpenAI GPT (Radford et al., 2018), Flair (Akbik et al., 2018) and BERT (Devlin et al., 2018) have shown great improvements in NLP tasks ranging from sentiment analysis to named entity recognition to question answering.
neutral
train_100920
The SemEval task description paper (Laparra et al., 2018b) has more details on dataset statistics and evaluation metrics.
there is a need to study pre-trained contextualized character embeddings, to see if they also yield improvements, and if so, to analyze where those benefits are coming from.
neutral
train_100921
The structure-based representation, however, seems to capture bot variability more effectively, i.e.
all hidden layers consist of ReLU activation units, and are regularized using a dropout rate of 0.5.
neutral
train_100922
During training, the input to this network is the representation of a conversation (either contentbased or structure-based), and the ground truth is a one-hot vector of the bot that handled this conversation.
in addition, if the bot was annotated as not-production, the experts had to provide a list of reasons for their choice (e.g., repeating user ids, repeating bot responses, etc.).
neutral
train_100923
We maintain a 60%-40% train-test split over the corpus, and average accuracy over 5 runs, varying the number of topics between 5 and 25.
using Tf-Idf weighting in conjunction with a Word2Vec representation helps alleviate issues that the individual representations face when used independently.
neutral
train_100924
We therefore ran a second set of qualitative experiments in which LDA topics were used to derive clusters of similar documents.
first, we associate each word w ∈ W with a set of documents, D(w), based on their similarity in the embedding space.
neutral
train_100925
We presented MCScript2.0, a new machine comprehension dataset with a focus on challenging inference questions that require script knowledge or commonsense knowledge for finding the correct answer.
some answers can still be read off the text, if other parts of the texts contain the same information as the hidden target sentences.
neutral
train_100926
As a more balanced and relevant test set, we use noun pairs (666 total) from the SimLex999 semantic similarity dataset (Hill et al., 2015).
the employed WSD algorithm starts with building a graph where the nodes are the WordNet synsets of the words in the input sentence.
neutral
train_100927
We did not compare our approach to the GraphSAGE embeddings (Hamilton et al., 2017b) and Graph Convolutional Networks (Schlichtkrull et al., 2018), since they make use of input node features, which are absent in our setup.
additionally, graph embeddings can be of importance in privacy-sensitive network datasets, since in this setup, explicitly storing edges is not required anymore.
neutral
train_100928
Particularly, in the case of WordNet, each node (synset) has 36 synsets in its V_2 on average, and half of the nodes do not have any neighbors at all.
raw WordNet similarities are still the best, but the path2vec models are consistently second after them (and orders of magnitude faster), outperforming other graph embedding baselines.
neutral
train_100929
We use negative sampling to form a training batch B, adding n negative samples (s_ij = 0) for each real (s_ij > 0) training instance: node (synset) pairs with zero similarity, where v_k and v_l are randomly sampled nodes from V.
we additionally used human-annotated semantic similarities from the same SimLex999.
neutral
train_100930
We consider the domain adversarial training network (Ganin et al., 2016) (DANN) on the user factor adaptation task.
we want to keep the feature space such that the features are predictive of document classes in a way that is invariant to demographic shifts.
neutral
train_100931
Different demographic groups can show substantial linguistic variations, especially in online data (Goel et al., 2016;Johannsen et al., 2015).
because of this, the topics all look very similar and are hard to interpret, so we do not show the topics themselves.
neutral
train_100932
In several cases, the algorithm has discovered "parallel" relations.
many datasets suffer from path sparsity, a lack of enough paths connecting source-target pairs, resulting in poor performance for many relations.
neutral
train_100933
informative about causality or related probabilities), then they should be in the same form as that knowledge: the alternative, to keep rule-like topoi apart from knowledge about rule-based(ish) systems, is counter-intuitive.
both intuitively and according to experimental evidence, positive acceptability judgements can still be made without fore-knowledge of such a connection.
neutral
train_100934
Compared to traditional pair-wise temporal relation representations, temporal dependency trees facilitate efficient annotations, higher inter-annotator agreement, and efficient computations.
our training data consists of two parts.
neutral
train_100935
The performance on the *-*-idf runs suggests that random word selection is as good or better.
from Table 1, we observe that adding noise is necessary for good performance, as we see that the various noise strategies consistently improve performance over the baseline on both the datasets.
neutral
train_100936
So are other image transformations such as translation, rotation, removing color and so on.
to reduce computational overhead, we filtered out entity mentions which were greater than length 5 from the Ontonotes dataset (4 respectively for CoNLL), and contexts which were greater than length 59 or smaller than length 5 (40 and 3 respectively for CoNLL).
neutral
train_100937
We say that a mention is a person-mention if the head of the mention is a PER named entity, and we say that the name of the person-mention is the PER named entity that is its head.
we restrict our attention to GPE entities that satisfy the following requirements: (1) they occur in the GeoNames database and (2) they are not countries.
neutral
train_100938
For each dataset, the highest score is bolded, and is underlined if the difference between it and the other model's score is statistically significant (p < 0.20 per a stratified approximate randomization test similar to that of Noreen, 1989). The system must determine whether a given pronoun refers to one, both, or neither of two given names.
(Again, we use the gold NER to identify GPE names in the CoNLL text.)
neutral
train_100939
first names with male proportion less than or equal to 0.5 in the gazetteer) by F. We remove all names occurring in training from L, M, and F. We use the spaCy dependency parser (Honnibal and Johnson, 2015) to find the heads of each mention.
changing the name of an organization while ensuring that it is compatible with nominals in the cluster is nontrivial without a finer semantic typing.
neutral
train_100940
We explore three classification models: logistic regression, instance-based learning, and prototypical neural networks (Snell et al., 2017).
for each event expression the semantic classifier generates 50 semantic features.
neutral
train_100941
Our findings indicate that debiasing methods that need an explicit set of words to be debiased are unlikely to be effective in removing all stereotype-like data.
we collected human judgments about a person's Big Five personality traits formed solely from information about the occupation, nationality or a common noun description of a hypothetical person.
neutral
train_100942
(2019) have demonstrated the capacities of contextualized word embeddings across a wide variety of tasks, including SPRL.
model ablations: all ablation experiments are conducted with markerEB in the multi-label formulation.
neutral
train_100943
The significance test results summarized in Table 4 are unambiguous: for many proto-role properties, ensembling helps to improve performance significantly (SPR1: 14/18 cases; SPR2: 7/14 cases; significance level: p < 0.05).
in addition, our model as a single model instance (Marker) is outperformed by RUD'18's approach both in the regression and in the multi-label setup.
neutral
train_100944
We thank our three anonymous reviewers for helpful suggestions.
downward monotonicity inferences are interesting in that they allow a phrase to be replaced with a more specific one, and thus the resulting sentence can become longer, yet the inference is valid.
neutral
train_100945
Notice that this is not a case of epistemic uncertainty.
of our treatment of VP negation, the universal quantifier ("all") and the existential quantifier ("some") are not interdefinable, as they are in classical logic.
neutral
train_100946
Other research has also identified demographic keys closely associated with vulgarity: Wang et al.
that data were designed for use in social science research, not natural language processing research, and thus there were several challenges in working with the data as they were collected, including: • The comments were saved in PDFs, and the metadata referenced each comment by a number that was drawn (not typed) into the PDF beside the comment.
neutral
train_100947
In particular, to extract information, Clause-Based Open IE systems (Del Corro and Gemulla, 2013;Angeli et al., 2015;Schmidek and Barbosa, 2014) reduce a complex sentence into simpler sentences using linguistic patterns.
we use a pretrained Semantic Role Labelling model 4 based on a Bi-directional LSTM network (He et al., 2017) with pre-trained ELMo embeddings (Peters et al., 2018).
neutral
train_100948
A manual examination of the results also showed the same trend.
nevertheless, it is important to assess how our system is performing.
neutral
train_100949
At prediction time, in both the discriminative and the generative cases, we find the most likely label sequence using Viterbi decoding.
while widely differing in the specific model structure and learning objective, all of these approaches achieve excellent results.
neutral
train_100950
While widely differing in the specific model structure and learning objective, all of these approaches achieve excellent results.
there is considerable variation between languages: Spanish has the highest coverage with over 90%, while Turkish, an agglutinative language with a vast number of word forms, has less than 50% coverage.
neutral
train_100951
As for the HMM, Y(x) is not necessarily the full space of possible tag-sequences; specifically, for us, it is the dictionary-pruned lattice without the token constraints.
as we will observe next, coupling the dictionary constraints with token-level information solves this problem.
neutral
train_100952
We use sentences from the TACOS corpus and record their timestamps.
correspond to low-level actions, and each sentence is aligned with the last of its associated low-level actions.
neutral
train_100953
In this way, we replace z_c by several z_{E_i} that can be handled by our bounding strategy.
the first three types of features were first introduced by McDonald et al.
neutral
train_100954
Our work differs in the use of joint learning and inference approaches.
learning parameters were tuned using cross-validation on the training set: the margin δ is set to 1, the GENLEX margin δ_l is set to 2, we use 6 iterations (8 for experiments on SAIL) and take the 250 top parses during lexical generation (step 1, Figure 5).
neutral
train_100955
(2011) used a graphical model semantics representation to learn from instructions paired with demonstrations.
figures 3 and 4 include a sample of our seed lexicon.
neutral
train_100956
Our inference includes an execution component and a parser.
we show that, given only a small seed lexicon and a task-specific executor, we can induce high quality models for interpreting complex instructions.
neutral
train_100957
We also separated clitics from their base word.
first, they switched from MLE to a Bayesian approach, estimating a probability distribution over model parameters θ and dependency trees T given the training corpus C and a prior distribution α over models: Headden et al.
neutral
train_100958
(2010), both trained and tested on the length 10 training data from the CoNLL-X Shared Task.
a direct comparison between dependency treebanks and dependencies produced by CCG is more difficult (Clark and Curran, 2007), since dependency grammars allow considerable freedom in how to analyze specific constructions such as verb clusters (which verb is the head?)
neutral
train_100959
The algorithm first initializes G to Φ and X to SU .
existing supervised approaches seldom exploit the intrinsic structure among sentences.
neutral
train_100960
The comparative results in Table 5 clearly show that while our vanilla seed lexicon performs comparably to off-the-shelf lexicons on our data, the paraphraser-expanded lexicon with sentiment profiles outperforms OpinionFinder, General Inquirer, and SentiWordNet.
the TEST set contains the 43 agreed double-annotated sentences, and an additional 238 sampled from the 500 single-annotated sentences, 281 sentences in total.
neutral
train_100961
We produce 4 features, two for each polarity: (1) the number of words such that 0 ≤ p_w^pos < 0.4; (2) the number of words such that 0.4 ≤ p_w^pos ≤ 1; similarly for the negative polarity.
we implemented a WordNet (Miller, 1995) based expansion that uses the 3 most frequent synonyms of the top sense of the seed word (WN-e).
neutral
train_100962
In the following, we provide a more formal characterization of the strong and weak generative power of ITSG with respect to context-free grammar (CFG) and TSG.
this restriction is compensated for by the existence of the Forward Substitution operation, which has no analog in the Earley algorithm.
neutral
train_100963
A syntax-based noise model may achieve better performance in detecting and correcting child word drops.
we then compute the total arc weight of all paths through FST_train by relabeling all input and output symbols to ε and then reducing FST_train to a single state using epsilon removal (Mohri, 2008).
neutral
train_100964
We run EM for 100 iterations, at which time the log likelihood of all sentences generally converges to within .01.
our model has 6,718 parameters, many more than the ESL model's 187.
neutral
train_100965
a ↷ b denotes that a tree b is attached to a tree a. rated in standard graph-based models.
• PP-Attachment features: when the parent word is a preposition, we define tri-gram features with the parent word and the POS tags of the grandparent and the rightmost child.
neutral
train_100966
This issue of "misleading" WP links becomes even more prominent when the links from the full articles are used as edges (LM); while the increase in recall is relatively small the precision drops substantially.
this is by far the highest value across all resources (see Table 4).
neutral
train_100967
Wikipedia (WP) is a freely available, multilingual online encyclopedia.
intuitively not all links in an article are equally meaningful.
neutral
train_100968
E.g., the two senses of letter "The conventional characters of the alphabet used to represent speech" and "A symbol in an alphabet, bookstave" (taken from WN and WKT, respectively) are clearly equivalent and should be aligned.
for the SR configuration, we decided to retain only the category links and the links within the first paragraph of the article.
neutral
train_100969
Though there is some empirical work on competitive assignments in the computer science education literature (Lawrence, 2004;Garlick and Akl, 2006;Regueras et al., 2008;Ribeiro et al., 2009), they generally measure student satisfaction and retention rather than the more difficult question of whether such assignments actually improve student learning.
we ensured that our data met this criterion.
neutral
train_100970
This transformation does not exactly preserve meaning, but still captures the most important relations.
from the following sentences: All of these require knowledge of lexical semantics (e.g.
neutral
train_100971
They do not require any knowledge of lexical semantics, meaning we can evaluate the formal component of our system in isolation.
almost all errors are due to incorrectly predicting unknown; the system makes just one error on yes or no predictions (with or without gold syntax).
neutral
train_100972
The terminal features are indicator features for each lexicon entry, as shown in the top row of Figure 4.
weakly supervised training estimates parameters for LSP using queries annotated with their denotations in an environment (Figure 1c).
neutral
train_100973
In this example, the denotation is the set of "things to the right of the blue mug," which does not include the blue mug itself.
each held-out sentence z_i is parsed to produce a logical form, which is marked correct if it exactly matches our manual annotation.
neutral
train_100974
These definitions and related prepositions provide a starting point to identify senses that can be merged across prepositions.
for the governor and object, we have a set of type labels, comprised of one element for each type category.
neutral
train_100975
All the probabilities in the above formulas are computed by Equation 3.
this subject will be one of our future work topics.
neutral
train_100976
With the same settings as before, we run the Gibbs sampler for 1000 iterations and utilize the final U-tree structure to build a string-to-tree translation system.
with SCFG, we have to discard all the internal nodes (i.e., flattening the U-trees or rules) to express the same sequence, leading to a poor ability to distinguish different U-trees and production rules.
neutral
train_100977
For a formal definition, see Johnson et al.
the template that obtained the highest score is then chosen.
neutral
train_100978
The most notable differences are in Turkish, where all models perform far worse on the test than dev set.
first, we trained 5 samplers on the 50k training set with the labelled set added, and used the labelled data to choose the best template for each inferred grammar.
neutral
train_100979
The next two rules collect a new dependent with a gap and embed it within the gap of our lower tree, creating a new dependency.
o[i, j, p′, q, j] =LR⇒ o[p, p′+1, −, −, p′+1] o[i, j, p, q, j] ∪ ∗a_j
neutral
train_100980
Alternatively, one can easily also use fully-automatic bootstrapping techniques based on seed word pairs (Hearst, 1992;Chklovski and Pantel, 2004;Yang and Su, 2007;Turney, 2008;Davidov and Rappoport, 2008).
good but not great) to reveal order information between a pair of adjectives (Sheinman and Tokunaga, 2009).
neutral
train_100981
Gibbs sampling is another effective algorithm for unsupervised learning problems.
AERs on the testing set are listed in Table 3.
neutral
train_100982
The head-modifier cohesion term ℎ is used to penalize the distortion probability according to relationships between the head node and its children (modifiers).
finally, we conclude this paper and mention future work in Section 7.
neutral
train_100983
The model assumes each source word is assigned to exactly one target word, and defines an asymmetric alignment for the sentence pair as a_1^J = a_1, a_2, …, a_j, …, a_J, where each a_j ∈ [0, I] is an alignment from the source position j to the target position a_j; a_j = 0 means the source word is not aligned with any target word.
in our model, the word distance is calculated based on the previous node in BUTorder rather than the previous word in the original sentence.
neutral
train_100984
They do not overlap each other, so the head-modifier cohesion is maintained.
the training procedure is very timeconsuming, and they trained the model with only 100 hand-annotated sentence pairs.
neutral
train_100985
We can use auxiliary symbols to denote the head phrase position in a CFG rule.
(2011) respectively evaluated their graph-based and transition-based parsers; Zhang and Clark (2011) evaluated. Table 6 (UAS of different models on the test data): CoNLL-test: (Li et al., 2012) 83.23%, Graph+Tran+FlatH+BinH+BinLR 87.23%; CTB5-test: (Li et al., 2011) 80.79%, (Hatori et al., 2011) 81.33%, (Zhang and Clark, 2011) 81.21%, Graph+Tran+FlatH+BinH+BinLR 84.65%.
neutral
train_100986
Before proceeding to the SMT experiments, we evaluate the performance of the WaW reordering model in isolation.
depending on how the DL, pruning parameters, and ϑ are set, we can actually aim at different targets: with a low DL, loose pruning parameters, and ϑ=0 we can try to speed up search without sacrificing much translation quality.
neutral
train_100987
This is an important result, considering that the jump length, strongly correlating with the jump likelihood, is not directly known to our model.
the need of specific reordering rules makes the method harder to apply to new language pairs.
neutral
train_100988
As regards efficiency, the new model makes decoding time increase by 8%.
we report statistically significant improvements in the reordering of verbs, which is where the impact of our method is expected to concentrate (+0.7, +0.8 and +1.0 KRS-V on eval08-nw, eval09-nw and vs-09, respectively).
neutral
train_100989
We do this calculation only for certain types of syntactic relations: a) nouns and their adjective modifiers, b) verbs with adverb modifiers, c) adjacent nouns in a noun phrase, and d) verb and subject pairs.
we identified the relations for texts in our corpus using the AddDiscourse tool (Pitler and Nenkova, 2009).
neutral
train_100990
Several other properties of science writing could also be relevant to quality such as the use of humor, metaphor, suspense and clarity of explanations and we plan to explore these in future work.
if it appears only with 'who' in all noun phrases, it is animate.
neutral
train_100991
For each pair, we generate function similarity features, Fun(x_i, x_j, k, p), where k and p vary as they did with domain space.
section 6 discusses the implications of the results.
neutral
train_100992
We utter about one metaphor for every ten to twenty-five words, or about six metaphors a minute (Geary, 2011).
this requires extensive knowledge.
neutral
train_100993
c. Max idf of matching base form: max idf of a word corresponding to a matching base.
furthermore, users could provide other kinds of feedback such as moving items between clusters, allowing for the opportunity for relevance feedback; classifier uncertainty could be surfaced to the user, and active learning approaches could be used to ask the teacher for specific labels.
neutral
train_100994
This process is iterated until the clusters converge.
finally, Pulman and Sukkarieh (2005) compare hand-authored patterns with machine-learned patterns based on simple word-based features, and find that the hand-crafted patterns perform better.
neutral
train_100995
We show results in terms of grading progress with a small "budget" of human actions, both from our method and an LDA-based approach, on a test corpus of 10 questions answered by 698 respondents.
instead of classifying individual answers as being right or wrong, we propose to automatically find groupings and subgroupings of similar answers from a large set of answers to the same question, and let teachers apply their expertise to mark the groups.
neutral
train_100996
If t_p is correct, we simply apply it and move to t_p(c) without changing the model parameters (line 11).
they are still only guaranteed to be correct on a subset of the possible configurations.
neutral
train_100997
(2006), using a greedy arc-eager transition-based system with pseudo-projective parsing.
interestingly, however, for both these languages there is nevertheless a small improvement in the joint PM score, indicating that the JOINT model in general does a better job at selecting a valid complete morphological description than the SIMPLETAG model.
neutral
train_100998
Turning to the results for SIMPLETAG, we note that our results are consistent with those reported by Bohnet and Nivre (2012), with small but consistent improvements in POS and UAS/LAS (and in the compound metrics PM and PMD) for most languages.
we use the open-source morphological analyzer OMorFi (Pirinen, 2011) and word clusters derived from the entire Finnish Wikipedia.
neutral
train_100999
The story is more complicated for seen words with known translations: if we limit ourselves to "high confidence" translations, there is a lot to be gained by improving the scores in translation models.
but this is based on intersected phrase tables, from which we removed seen and sense distinctions, and in which there is no competition between phrases from the OLD and NEW systems.
neutral