Column     Type           Range
id         stringlengths  7–12
sentence1  stringlengths  6–1.27k
sentence2  stringlengths  6–926
label      stringclasses  4 values
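The rows below all follow this schema: an example id, a sentence1/sentence2 pair, and a label (only "neutral" appears in this slice of the preview, out of the 4 label classes). As a minimal illustrative sketch of how a row from this preview can be represented and sanity-checked, the plain-Python snippet below models rows as dicts whose field names simply mirror the schema above; the two example rows are copied from the preview itself. This is an assumption-level illustration, not an official loading script for the dataset.

from collections import Counter

# Two rows copied from the preview below; each row is a dict with the four schema fields.
rows = [
    {
        "id": "train_92100",
        "sentence1": ("Existing research efforts mostly regard this joint task as a "
                      "sequence labeling problem, building models that can capture "
                      "explicit structures in the output space."),
        "sentence2": ("we consider the following baselines: • Pipeline (Zhang et al., 2015) "
                      "and Collapse (Zhang et al., 2015) are both linear-chain CRF models "
                      "using discrete features and embeddings."),
        "label": "neutral",
    },
    {
        "id": "train_92112",
        "sentence1": "The statistics of these three datasets are shown in Table 1.",
        "sentence2": "next, we will introduce the components of our method in detail.",
        "label": "neutral",
    },
]

# Sanity checks against the schema: id lengths fall in 7-12 and sentences are at least 6 chars.
for row in rows:
    assert 7 <= len(row["id"]) <= 12
    assert len(row["sentence1"]) >= 6 and len(row["sentence2"]) >= 6

print(Counter(row["label"] for row in rows))  # label distribution of the sampled rows

In the full dataset the label column would be expected to carry all 4 classes; the preview below only shows the neutral slice.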
train_92100
Existing research efforts mostly regard this joint task as a sequence labeling problem, building models that can capture explicit structures in the output space.
we consider the following baselines: • Pipeline (Zhang et al., 2015) and Collapse (Zhang et al., 2015) are both linear-chain CRF models using discrete features and embeddings.
neutral
train_92101
They modelled the latent sentiment scope based on CRF with latent variables, and achieved the best performance among all the existing works.
the difference between EI and SA-CRF is that our model EI captures flexible explicit structures in the output space. Table 3: Results on subjectivity as well as non-neutral sentiment analysis on the Spanish dataset.
neutral
train_92102
This procedure aims to adjust the contribution of each capsule based on the high-level capsule (i.e.
high-level capsules) to maximize the sum of the weighted log probabilities that the Gaussian would generate the datapoints (i.e.
neutral
train_92103
Our ultimate training goal is to minimize the loss function with parameters θ = {θ_f, θ_y, θ_g, θ} as follows, where λ_1, λ_2 and λ_3 are the weight parameters to balance the importance of losses between the emotion detection and the two personal attribute discriminators.
moreover, the adversarial discriminators are much more effective than the attention mechanisms.
neutral
train_92104
This information gap at both ends of the decoder means that the decoder must bear both the target sentiment addition and the content reconstruction.
decoder: Given the hidden representations, the decoder generates words sequentially.
neutral
train_92105
It not only has a variety of practical applications, e.g., fighting offensive language on social media (dos Santos et al., 2018), but also serves as an ideal testbed for controllable text generation.
in addition, we train the sentiment classifier (Kim, 2014) to filter samples with the category confidence below 0.8.
neutral
train_92106
In the second step, the decoder does not include the sentiment information in its source input, meaning that the source information is incomplete for generating a sentiment-transferred sentence.
the training of RL is unstable and sensitive to hyper-parameters.
neutral
train_92107
Indeed whereas such resources have been gathered in the case of textual data and can be used to deeply understand the expression of opinions (Wiebe et al., 2005;Pontiki et al., 2016), the different attempts at annotating multimodal reviews have shown that reaching good annotator agreement is nearly impossible at a fine grained level.
this strategy and the following one are introduced to test whether the order in which the tasks are introduced has some importance on the final scores.
neutral
train_92108
(2018a): we represent each sentence by taking the average of the feature representations of the tokens composing it.
the price to pay to gather reliable data is the definition of an annotation scheme focusing on coarse grained information such as long segment categorization as done by Zadeh et al.
neutral
train_92109
Furthermore, the text used in these datasets often uses the same set of words to express clearly contrastive sentiments, thereby setting up groups or domains of distinct word use within a single data set.
accordingly, the KCCA domain-adapted word embeddings are indeed capturing something significant about the language usage in the domain.
neutral
train_92110
The domain-adapted word w_(i,DA) is then passed as input to the CNN encoder as shown in the figure.
a model that generates interpretable, structured sentence embeddings using a self-attention mechanism (Lin et al., 2017).
neutral
train_92111
These auxiliary tasks focus on directly enhancing domain-invariant features, while ours strips domain-specific features to distill domaininvariant features.
this indicates that DIFD can learn purer domain-invariant features than ASC+At.
neutral
train_92112
The statistics of these three datasets are shown in Table 1.
next, we will introduce the components of our method in detail.
neutral
train_92113
Table 2 shows the performance comparison of different approaches.
given a review document with a clause sequence and an aspect, the high-level policy decides whether a clause mentions this aspect.
neutral
train_92114
In this paper, to simulate the steps by which human beings analyze aspect sentiment in a document, we propose a new Hierarchical Reinforcement Learning (HRL) approach to DASC.
a well-behaved approach should discard noisy clauses for a specific aspect during model training.
neutral
train_92115
Such a response makes several assumptions which are clearly violated here, such as the involvement of government.
finally, asking again about a claim's mention enables result validation, when the answer is known a-priori with high confidence.
neutral
train_92116
2018a), are orthogonal to our proposed attribute representation and injection method.
the first task is Sentiment Classification, where we are tasked to classify the sentiment of a review text, given additionally the user and product information as attributes.
neutral
train_92117
We expect this paper to serve as the first investigatory paper that contradicts the positive results previous work has seen from the bias-attention method.
it has been well perceived that using only the user and product attributes to generate text is unreasonable, since we expect the model to generate coherent texts using only two vectors.
neutral
train_92118
One crucial emotion cause cue is sentiment words since emotion causes usually express a certain sentiment polarity by sentiment words.
λ_1 and λ_2 are hyper-parameters.
neutral
train_92119
Furthermore, for the English dataset, our proposed model has balanced performance in precision and recall.
rHNN manages to boost the performance by 3.06% in F-measure compared to LambdaMART, which shows that by restraining the parameters with knowledge-based regularizations, rHNN is better at identifying the emotion cause cues than feature engineering.
neutral
train_92120
The arguments collected had to have 8–36 words, aimed at obtaining efficiently phrased arguments (longer/shorter arguments were rejected by the UI).
assessing argument quality has driven practitioners in a plethora of fields for centuries, from philosophers (Aristotle et al., 1991), through academic debaters, to argumentation scholars (Walton et al., 2008).
neutral
train_92121
To overcome these limitations, we propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type.
(2) and (3) into an F1-measure, the harmonic mean of precision and recall.
neutral
train_92122
Is it right that they should pay for their son's murderer to be fed and housed?"
we designed an annotation schema of 18 propaganda techniques, and we annotated a sizable dataset of documents with instances of these techniques in use.
neutral
train_92123
In this paper, we introduce a recurrent neural network based approach for the multi-modal sentiment and emotion analysis.
we argue that the encoded representation learns the interaction between the text and acoustic modalities.
neutral
train_92124
For each dataset, we report the three best systems for the comparisons.
this performance degradation suggests that the IIM module is, indeed, an important component of our proposed architecture.
neutral
train_92125
Similar work on feature-level fusion based on self-attention mechanism is reported in (Hazarika et al., 2018).
we employ another IIM to capture the interaction between the visual-text.
neutral
train_92126
Our observations in this experiment are the following: • CNNs fail to fully incorporate sequential information as performance on random ordering and correct ordering are marginally near each other.
we employ two different architectures of SCARN model since the datasets vary in size.
neutral
train_92127
These approaches provided strong baselines for text classification (Wang and Manning, 2012).
here we make a distinction between the semantic knowledge of a word and the task specific knowledge of a word.
neutral
train_92128
We observe that SCARN significantly outperforms concat-SCARN on all the datasets.
this does not translate in the same way to selecting the most relevant features from convolved features in texts.
neutral
train_92129
This leads to another question: whether the feature selected by max pooling will always be the most important feature of the input.
a new feature vector C_i is obtained for each word w_i after convolving with K filters.
neutral
train_92130
After computing the attention scores, the final context representation v_d is computed as follows: We then concatenate the context representation with the target claim representation and pass it to the classification layer to predict the quality.
claims with higher impact parents are more likely to have higher impact.
neutral
train_92131
One recent study in NLP has investigated the role of argument sequencing in argument persuasion (Hidey and McKeown, 2018), looking at Change My View, a social media platform where users post their views and challenge other users to present arguments in an attempt to change them.
this structure enables us to study the characteristics of impactful claims, accounting for the effect of the pragmatic context.
neutral
train_92132
Incorporating the parent representation only along with the claim representation does not give significant improvement over representing the claim only.
we find that the BERT model with claim only representation performs significantly better (p < 0.001) than the baseline models.
neutral
train_92133
Based on this assumption we aggregate only the aspect embeddings using a max pool with the aim to retain most of the information.
most recent methods have focused on leveraging neural networks (Chen et al., 2017; Gu et al., 2018; Majumder et al., 2018; Fan et al., 2018a; Xue and Li, 2018; Huang and Carley, 2018; Zheng and Xia, 2018) which model representations automatically.
neutral
train_92134
We conduct an experiment to demonstrate that the performance of our proposed models, namely CDT and ASP-GCN, depends on the number of layers of the GCN.
TNet, on the other hand, measures the association between 'Sangria' and 'good' in both directions to identify the correct 'good'.
neutral
train_92135
We perform a subsequent run of ASP-GCN on the same sentence s to extract a final embedding, but in this instance we conceal a specific input word w. We conceal w by mapping it to the zero vector before applying ASP-GCN on s. As a result a final embedding h_(s\w) is generated for s. If h_s = h_(s\w), the word w has no impact on the representation h_s.
61772059, 61421003), by the Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC), by State Key Laboratory of Software Development Environment (No.
neutral
train_92136
Even with simple architectures, we find that ASP-BiLSTM and ASP-GCN have competitive performance with the recent models on benchmark datasets.
together with node embeddings modeled by BiLSTMs, we can exploit a GCN capable of operating directly on graphs.
neutral
train_92137
According to our empirical analysis, the percentage of repetitive words drops from 8.3% to 6.5% by our proposed methods on the IWSLT14 De-En test set, which is a 20%+ reduction.
during training, the predictions at different positions can be estimated in parallel since the ground truth pair (x, y) is exposed to the model.
neutral
train_92138
As we explain in §3, our investigation requires a definition of lexical semantics that is independent of grammatical gender.
for example, almost all Spanish nouns ending in -a are feminine, whereas Spanish nouns ending in -o are usually masculine.
neutral
train_92139
The MIL approach for document classification is illustrated in Figure 1.
for 159 cities that have representation of the three crimes both in Patch and FBI crime reports, we calculate the ratio of Patch-based predictions to FBI reports for each crime.
neutral
train_92140
We also appreciate anonymous reviewers for constructive review comments.
(2016)) only use the given sentences, which suffer from the low efficiency problem in capturing long-range dependency; dependency tree based methods utilize the syntactic relations (i.e., arcs) in the dependency tree of a given sentence to more effectively capture the interrelation between each candidate trigger word and its related entities or other triggers.
neutral
train_92141
In this paper, we propose a Hierarchical Modular Event Argument Extraction (HMEAE) model, to provide effective inductive bias from the concept hierarchy of event argument roles.
taking Figure 1 as an example, "Seller" is conceptually closer to "Buyer" than "time-within", because they share the same superordinate concepts "Person" and "Org" in the concept hierarchy.
neutral
train_92142
The argument role classifier relies on the instance embedding and the role-oriented embedding to estimate the probability of a certain argument role for the instance.
we design a neural module network for each basic unit of the concept hierarchy, and then hierarchically compose relevant unit modules with logical operations into a role-oriented modular network to classify a specific argument role.
neutral
train_92143
The whole procedure is trained end-to-end, with the model learning simultaneously how to identify important links between spans and how to share information between those spans.
for each variation, we study the effect of integrating different task-specific message propagation approaches.
neutral
train_92144
Following the experimental settings and evaluation metrics in Bai and Zhao (2018), we use two most-used splitting methods of PDTB data, denoted as PDTB-Lin (Lin et al., 2009), which uses sections 2-21, 22, 23 as training, validation and test sets, and PDTB-Ji (Ji and Eisenstein, 2015), which uses 2-20, 0-1, 21-22 as training, validation and test sets and report the overall accuracy score.
these models typically use pre-trained semantic embeddings generated from language modeling tasks, like Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018).
neutral
train_92145
(2018) fed external world knowledge (ConceptNet relations and coreferences) explicitly into MAGE-GRU (Dhingra et al., 2017) and achieved improvements compared to only using the relational arguments.
it is a corpus annotated over 24 open-access full-text articles from the GENIA corpus (Kim et al., 2003) in the biomedical domain.
neutral
train_92146
(2017) (i.e., short binary patterns, why-QA system's answers, and sentences with clue terms).
(2013)'s CRF-based causality recognizer and used them as a causality-rich corpus while assuming that the cause and effect are sentence pairs for the next-sentence prediction task (9,783,691 pairs or 19,567,382 sentences).
neutral
train_92147
However, this effect was limited.
as the parameter settings of the pre-training for all of the BERT models, we followed the BERT-Base settings (Devlin et al., 2019) except for a batch size of 50.
neutral
train_92148
While this yielded a model that was quite well-matched to the EEG modeling task, its training data was confined to just 1543 sentences (24K words).
the importance of domain adaptation in NLP has been well-established in earlier work (see Daumé III, 2007 and footnote 1), including applications to parsing (Sarkar, 2001;McClosky et al., 2006;Søgaard and Rishøj, 2010;Weiss et al., 2015).
neutral
train_92149
In contrast, the development of distributional modelling techniques and the availability of vast text corpora have allowed researchers to construct effective vector space models of word meaning over large lexicons.
in this context, G and F represent GloVe embedding vectors and property norm feature vectors, respectively.
neutral
train_92150
For Feature2Vec, we [Table 3: Top 10 predictions from the CSLB-trained Feature2Vec model. Kingfisher: has wings, does fly, has a beak, has feathers, is a bird, does eat, has a tail, does swim, has legs, does lay eggs. Avocado: is eaten/edible, is tasty, does grow, is green, is healthy, is used in cooking, has skin/peel, is red, is food, is a vegetable. Door: made of metal, has a door/doors, is useful, has a handle/handles, made of wood, made of plastic, is heavy, is furniture, does contain/hold, is found in kitchens. Dragon: is big/large, is an animal, has a tail, does eat, is dangerous, has legs, has claws, is grey, is small, does fly.]
this comes at the cost of interpretable, human-like information about word meaning.
neutral
train_92151
To this end, our ConVQA Dataset consists of two subsets: 1) a challenging human-annotated set comprised of commonsense-based consistent QA's (shown in Figure 3), and 2) an automatically generated logicbased consistent QA dataset (shown in Figure 2).
man", an appropriate entailed question might be "Is the tennis court empty?".
neutral
train_92152
Our Consistency Checker has a high accuracy of classification on the L-ConVQA test set (90%).
we compare to a number of baseline models to put our CTM results into context: Table 1 shows quantitative results on our L-ConVQA and CS-ConVQA datasets.
neutral
train_92153
Section 4 reports benchmark results.
we tested several state-of-the-art methods for question answering, textual entailment, and reading comprehension on our GeoSQA dataset.
neutral
train_92154
The 22 diagram categories and 81 text templates are not predefined but incrementally induced during the experiment.
we would like to thank all the annotators.
neutral
train_92155
Each category is associated with a set of text templates that are recommended to be used in annotations as far as possible.
finally, we choose the option with the highest score as the answer.
neutral
train_92156
(2017), where the Watson-Paths system is presented to answer questions that describe a medical scenario about a patient and ask for diagnosis or treatment.
compared with GeoSQA, their datasets are small and, more importantly, they ignore diagrams which represent a unique challenge to geographical SQA.
neutral
train_92157
Algorithm 1 achieves perfect accuracy on ToM-easy and ToM.
first-Order Belief B: Where will B look for O?
neutral
train_92158
For this reason, we propose to systematically ask all question types for each generated story.
towards Robust ToM Evaluation in QA: to increase robustness against such regularities and correlations, we build upon the ideas of ToM-bAbI but improve data generation and evaluation in multiple ways.
neutral
train_92159
(3) Quasar-T (Dhingra et al., 2017) and (4) SearchQA (Dunn et al., 2017) are leveraged with the official split.
then the vector representation of each word position from the BERT encoder is fed into two separate dense layers to predict the probabilities P_s and P_e (Devlin et al., 2018).
neutral
train_92160
The answer A should be a span which is directly extracted from the passage P. According to most of the works on SQuAD, the task can be simplified by predicting the start and end pointer in the passage (Wang and Jiang, 2016).
• No more than five questions for each passage.
neutral
train_92161
Each Γ_n(W) ∈ R^(|W|×d) represents n-gram semantics.
unigram CNN uses 1:1 word matching, and is able to utilize word-based signals as query term weighting value.
neutral
train_92162
This variance is especially high for local IDF, which serves as a strong signal as consistently observed in (Blair-Goldensohn et al., 2003).
contextualizing into word-level representation makes it natural to combine with word-level IDF scores in the model, and also enables indexing (Hwang and Chang, 2005).
neutral
train_92163
Non-factoid questions, unlike factoid questions answered by short facts like a word or a phrase, may get answered by a long answer spanning across multiple sentences.
due to a typical length difference between a question and a long answer in our problem setting, most answer words are left unmatched, except a few uni-grams in the answer.
neutral
train_92164
Edge vectors are similarly added when summing over node representations to produce the new output representations.
creating labeled data for this task can be expensive and time-consuming.
neutral
train_92165
Quadruplet loss extends triplet loss to ensure smaller intra-class distances and larger interclass distances.
all our models ignore any interaction between question and answer representations, and use a shared encoder to independently encode questions and answers.
neutral
train_92166
However, although QuaRel contains 2700 qualitative questions, its underlying qualitative knowledge was specified formally, using a small, fixed ontology of 19 properties.
they were then asked to author a situated, 2-way multiple-choice (MC) question that tested this relationship, guided by multiple illustrations.
neutral
train_92167
The control vector c is a randomly initialized learnable vector.
at each hop, a single control vector is used to interpret each memory cell independently.
neutral
train_92168
Combined with a simple transfer learning technique from a large-scale online corpus, our model achieves new state-of-the-art results on the TrecQA and WikiQA datasets.
our proposed techniques do add a significant amount of performance.
neutral
train_92169
Previous paraphrase generation work on QUORA (Gupta et al., 2018; Li et al., 2018) did not mention removing these entries, thus we include them in our experiments for fair comparison.
both full parroting and the forms of partial parroting we use are fully unsupervised.
neutral
train_92170
Full parroting simply uses the input as output (o = i).
for future paraphrase generation research that achieves good evaluation scores, we suggest investigating whether their methods or models act differently from simple parroting behavior.
neutral
train_92171
Annotators assigned scores on a fivepoint scale, with 1 and 5 indicating that the summary is bad or good with respect to a specific Q.
differences among systems are thus small in this respect, and although SUM-QE predicts scores in this range, it struggles to put them in the correct order, as illustrated in Figure 3.
neutral
train_92172
Further, at generation time, they heavily rely on rejection sampling to produce quatrains which satisfy any valid rhyming pattern.
we show that the discriminator induces an accurate rhyming metric and the generator learns natural rhyming patterns without being provided with phonetic information.
neutral
train_92173
We concatenate the word embedding e_t, answer position embedding a_t and lexical features embedding l_t as input. Then a bidirectional LSTM is used to produce a sequence of hidden states (h_1, ..., h_T).
assuming m+1, ..., m+a are the indices of the given answer span in the input sentence, based on the corresponding feature-rich hidden states (h_(m+1), ..., h_(m+a)), we calculate the question type hidden states as follows, where l_(m+j) are the corresponding lexical features.
neutral
train_92174
To obtain a combined training objective of two tasks, we add the loss of question type prediction into the loss of question generation to form a total loss function: 3 Experiment Dataset Following the previous works, we conduct experiments on two datasets, SQuAD and MARCO.
we reduce the question type vocabulary to only question words.
neutral
train_92175
Many innovative deep RL methods (Ranzato et al., 2016;Wu et al., 2016;Paulus et al., 2018;Lamb et al., 2016) are developed to alleviate this issue by providing sentence-level feedback after generating a complete sentence, in addition to optimal transport usage (Napoles et al., 2012).
(2018) do, because we want to examine the repetition of sentence directly produced after training.
neutral
train_92176
Sequence Matching task has attracted lots of attention in the past decades.
then we extend BERT with a Match-LSTM layer to get further interaction of the sentence pair for sequence matching tasks.
neutral
train_92177
Our first approach relies on the usage of textbooks from specific domains of short answer grading.
these include our own textbooks and additional ones downloaded from the Lumen Learning and Gutenberg websites.
neutral
train_92178
Universal Language Model Fine-Tuning (ULMFiT) (Howard and Ruder, 2018) method is one of the first such initiatives to illustrate the effectiveness of language model fine-tuning.
our findings of the sensitivity of domain-specific BERT models appear generic.
neutral
train_92179
Compared with CB_B, MNLI+CB_B improves the overall performance of contradiction and the recall of neutral.
cB_B and MNLI+CB_B performances on the Yes-items are similar, and both outperform MNLI_B (statistically significant improvements, McNemar's test, p < 0.01).
neutral
train_92180
To avoid such lists we used the following rules: (i) for English, sentences have to start with an uppercase letter and end with a period; (ii) for Nepali and Sinhala, sentences should not contain symbols such as bullet points, repeated dashes, repeated periods or ASCII characters.
we first use automatic methods to filter out poor translations and send those translations back for rework.
neutral
train_92181
Improving translation performance on low-resource language pairs could be very impactful considering that these languages are spoken by a large fraction of the world population.
we observed large variations in the level of translation quality, which motivated us to enact a series of automatic and manual checks to filter out poor translations.
neutral
train_92182
This work introduces conditional masked language models and a novel mask-predict decoding algorithm that leverages their parallelism to generate text in a constant number of decoding iterations.
mask-predict repeatedly masks out and repredicts the subset of words in the current translation that the model is least confident about, in contrast to recent parallel decoding translation approaches that repeatedly predict the entire sequence .
neutral
train_92183
In traditional left-to-right machine translation, where the target sequence is predicted token by token, it is natural to determine the length of the sequence dynamically by simply predicting a special EOS (end of sentence) token.
we find that our approach significantly outperforms prior parallel decoding machine translation methods and even approaches the performance of standard autoregressive models (Section 4.2), while decoding significantly faster (Section 4.3).
neutral
train_92184
Compared with the co-occurrence relation, which is subject to the contexts and topics of a document, the word association test bears a more direct interaction with human brains.
we snapshot a sub-graph of G with P * in Figure 1 to show how stereotype propagates.
neutral
train_92185
If the score exceeds 3, the word "love" is considered as the seed positive sentiment phrase to begin the bootstrapping procedure. [...] a threshold (i.e., p > 0.5), the candidate phrase is added to the gazetteer.
moreover, it ends up generating only 20 different responses for our entire test dataset making its output redundant and unrelated to the input.
neutral
train_92186
Reference: you have to sacrifice your first born child for swimming lessons in nyc.
for training this module, the sarcasm corpus S is used.
neutral
train_92187
The learning algorithm minimizes negative log likelihood for optimizing parameters.
we also incorporate explicit means for reducing redundancy during decoding.
neutral
train_92188
Unlike the current implementation of SYPHON, these models include representational structures that enable them to capture certain non-local phenomena.
for illustration purposes, in all examples we will assume that our feature set has just three features: voice v, sonorant s, and continuant c. The input to this step is the matrix of surface forms X_ij and the output is a set of aligned pairs (U, X_ij).
neutral
train_92189
Inference must be data-efficient: typically only a handful of data points are available.
this is a very computationally intensive problem.
neutral
train_92190
For example, in the following sentence, "In 2005 two fed- eral agencies, the US Geological Survey and the Fish and Wildlife Service, began to identify fish in the Potomac and tributaries ...", the model should use "federal agencies" to help classify "US Geological Survey" and "Fish and Wildlife Service" as government agency, but focus on "fish" and "tributaries" to determine that "Potomac" should be a body of water (river) instead of a city.
we also present a hybrid classification method beyond binary relevance to exploit type interdependency with latent type representation.
neutral
train_92191
There have indeed been such attempts, e.g., in clinical narratives Tourille et al., 2017) and in newswire (Cheng and Miyao, 2017;Meng and Rumshisky, 2018;Leeuwenberg and Moens, 2018).
we should move along that direction to collect more high-quality data, which can facilitate more advanced learning algorithms in the future.
neutral
train_92192
Note that the KB type representation is obtained from a knowledge base through entity linking and is independent of the context of the mention.
these datasets are more preferred by recent studies (Ren et al., 2016a;Murty et al., 2018).
neutral
train_92193
Muis and Lu (2016a).
the next baseline is a hypergraph-based method by Muis and Lu (2016a).
neutral
train_92194
Our model learns representations of these lower-arity relations using the universal schema framework, and predicts new n-ary facts by aggregating scores of lower-arity facts.
for the one derived from a textual relation, we use the following encoders to compute its representations.
neutral
train_92195
For example, in Figure 1 all subsequences of the sentence, such as "George Washington", will first be encoded,
to better decouple name and context knowledge for incorporating gazetteer information, we first design a new region-based attentive neural network (ANN), which introduces an attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017; Su et al., 2018) to explicitly model the association between mentions and contexts.
neutral
train_92196
To create a corpus of candidate phrases and manually confirmed VAs, we use the dataset of Fischer and Jäschke (2019) as starting point and their method as baseline.
the popularity measure could easily be changed to any other measure like statements or using Wikipedia clickstream data.
neutral
train_92197
In this paper, we presented approaches for the first fully automated extraction of the stylistic device Vossian Antonomasia.
instead of relying only on one individual syntactic pattern, "the ENTITY of", we extended the candidate generation process with the patterns "the ENTITY among" and "the ENTITY for", and six variations thereof where "the" in the pattern is replaced by "a" or "an", respectively.
neutral
train_92198
We can see that HanPaNE+P showed the highest accuracy and HanPaNE+P and VE+P, with consideration of paraphrases, showed a higher accuracy than Baseline.
hence, chemical NER has been studied to recognize chemical compounds from chemical [...]
neutral
train_92199
(wc) and LSTM(sc) are shared in NER and paraphrase generation.
hanPaNE-P and VE-P, without consideration of paraphrases, did not.
neutral