id: string (lengths 7–12)
sentence1: string (lengths 6–1.27k)
sentence2: string (lengths 6–926)
label: string (4 classes)
train_17100
Overall, given an input question, the layout policy learns what sub-tasks to perform, and the neural modules learn how to perform each individual sub-task.
since the modular layout is sampled from the controller, the controller itself is not end-to-end differentiable and has to be optimized using reinforcement learning algorithms like REINFORCE (Williams, 1992).
contrasting
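The non-differentiable controller described above is typically trained with a score-function estimator. Below is a minimal, hypothetical sketch of one REINFORCE (Williams, 1992) update in PyTorch; `controller.sample` and `modules.execute` are assumed interfaces, not the actual API of the system in the excerpt.

```python
import torch

# Hedged sketch: a single REINFORCE update for a layout controller whose
# sampled layout makes the pipeline non-differentiable.
# `controller.sample` and `modules.execute` are hypothetical interfaces.
def reinforce_step(controller, modules, question, optimizer, baseline=0.0):
    layout, log_prob = controller.sample(question)  # discrete sample: no gradient path
    reward = modules.execute(layout, question)      # e.g., 1.0 if the answer is correct
    loss = -(reward - baseline) * log_prob          # policy-gradient surrogate loss
    optimizer.zero_grad()
    loss.backward()                                 # gradients flow only through log_prob
    optimizer.step()
    return reward
```

Because the layout is a discrete sample, gradients can only flow through the log-probability term, which is exactly why the controller cannot be trained end-to-end by backpropagation alone.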
train_17101
The CompTreeNN and CompTreeNTN, while better, are not able to solve the task perfectly either.
it should be noted that the CompTreeNN outperforms our four standard neural models by ≈30% and the CompTreeNTN improves on this by another ≈10%.
contrasting
train_17102
If one could determine such rationales, new capabilities would become possible, including explanation, identifying alternative event orderings, and answering "what if..." questions.
this task is challenging as it requires knowledge of the preconditions and effects of actions, typically unstated in the text itself.
contrasting
train_17103
On first sight, this might seem like a simple solution to the dependency prediction task.
as we show later, this approach does not produce good results.
contrasting
train_17104
This includes datasets that test a system's ability to extract factual answers from text (Rajpurkar et al., 2016; Nguyen et al., 2016; Trischler et al., 2016; Mostafazadeh et al., 2016; Su et al., 2016), as well as datasets that emphasize commonsense inference, such as entailment between sentences (Bowman et al., 2015; Williams et al., 2018).
there are growing concerns regarding the ability of NLU systems-and neural networks more generally-to generalize in a systematic and robust way (Bahdanau et al., 2019; Lake and Baroni, 2018; Johnson et al., 2017).
contrasting
train_17105
Our work is also related to the domain of question answering and reasoning in knowledge graphs (Das et al., 2018; Xiong et al., 2018; Hamilton et al., 2018; Xiong et al., 2017; Welbl et al., 2018; Kartsaklis et al., 2018), where either the model is provided with a knowledge graph to perform inference over or where the model must infer a knowledge graph from the text itself.
unlike previous benchmarks in this domain-which are generally transductive and focus on leveraging and extracting knowledge graphs as a source of background knowledge about a fixed set of entities-CLUTRR requires inductive logical reasoning, where every example requires reasoning over a new set of previously unseen entities.
contrasting
train_17106
We found that time-constrained AMT annotators performed well (i.e., > 70% accuracy) for k ≤ 3 but struggled with examples involving longer stories, achieving 40-50% accuracy for k > 3.
trained annotators with unlimited time were able to solve 100% of the examples (Appendix 1.7), highlighting the fact that this task requires attention and involved reasoning, even for humans.
contrasting
train_17107
Even though this content is not strictly useful or needed for the reasoning task, it may provide some linguistic cues (e.g., about entity genders) that the models exploit.
the BERT-based models do not benefit from the inclusion of this extra content, which is perhaps due to the fact that they are already built upon a strong language model (e.g., one that already adequately captures entity genders).
contrasting
train_17108
The accuracy of machine-learned speech recognizers (Hinton et al., 2012) and speech synthesizers (van den Oord et al., 2016) is good enough for them to be deployed in real-world products, and this progress has been driven by publicly available labeled datasets.
conspicuously absent from this list is equal progress in machine learned conversational natural language understanding (NLU) and generation (NLG).
contrasting
train_17109
Since the ultimate goal is to be able to handle complex human language behaviors, it would seem that human-human conversational data is the better choice for spoken dialog system development (Budzianowski et al., 2018).
learning from purely human-human based corpora presents challenges of its own.
contrasting
train_17110
MultiWOZ is an entirely written corpus and uses crowdsourced workers for both assistant and user roles.
taskmaster-1 has roughly 13,000 dialogs spanning six domains and annotated with API arguments.
contrasting
train_17111
The need for high-quality, large-scale, goal-oriented dialogue datasets continues to grow as virtual assistants become increasingly widespread.
publicly available datasets useful for this area are limited either in their size, linguistic diversity, domain coverage, or annotation granularity.
contrasting
train_17112
This creates a disparate conversation: agents are incentivized to operate within a set procedure and convey a patient and professional tone.
customers do not have this incentive.
contrasting
train_17113
Commonly, datasets provide annotations at the turn level (Budzianowski et al., 2018; Asri et al., 2017; Mihail et al., 2017).
turn level annotations can introduce confusion for IC datasets, given multiple intents may be present in different sentences of a single turn.
contrasting
train_17114
(2018) builds a seq2seq neural network model for short texts to identify and recover ellipsis.
these methods are still limited to short texts or one-shot dialogues.
contrasting
train_17115
To handle this problem, memory networks are usually a great choice and a promising approach.
existing memory networks do not perform well when leveraging heterogeneous information from different sources.
contrasting
train_17116
Their models mostly try to use the attention mechanism, including memory network techniques, to fetch the most similar knowledge (Sukhbaatar et al., 2015), then incorporate grounding knowledge into a seq2seq neural model to generate a suitable response (Madotto et al., 2018).
existing memory networks equally treat information from multiple sources, e.g.
contrasting
train_17117
Comparing with the sentences generated by humans, although the entities and sentences differ from the gold answer in example one, our approach is able to produce more fluent and accurate sentences.
for the weather forecasting task, neither HMNs nor Mem2Seq can outperform SEQ2SEQ.
contrasting
train_17118
Due to their inherent capability in semantic alignment of aspects and their context words, attention mechanism and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification.
these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment.
contrasting
train_17119
These words happen to exhibit similar patterns of how aspect words occur on sentence level (Lin and He, 2009).
the above methods, only exploiting global context, are arguably suboptimal for largely ignoring the rich information delivered by local context.
contrasting
train_17120
This is because the DMI can infer relational representations that capture some common latent relations between aspect and opinion words.
it still cannot completely capture the multi-word aspect terms (e.g.
contrasting
train_17121
Multi-task learning can learn better hidden states of the sentence, and better aspect embeddings.
it is still not good enough.
contrasting
train_17122
Table 1 shows examples of aspects and five of their corresponding seed words from our experimental datasets (described later in more detail).
to a classification label, which is only relevant for a single segment, a seed word can implicitly provide aspect supervision to potentially many segments.
contrasting
train_17123
CI: conversation interaction modeling.
its effect on conversation recommendation has not been explored yet, and our work aims to fill the gap.
contrasting
train_17124
We can observe that Reddit dataset contains more conversations, with a higher average number of conversations per user.
twitter conversations are longer, with fewer participants.
contrasting
train_17125
Recently, neural networks based on multi-task learning have achieved promising performance on fake news detection; these networks focus on learning shared features among tasks as complementary features to serve different tasks.
in most of the existing approaches, the shared features are completely assigned to different tasks without selection, which may lead to some useless and even adverse features integrated into specific tasks.
contrasting
train_17126
private model to keep the shared and private latent feature spaces from interfering with each other.
these models transmit all shared features in the shared layer to related tasks without distillation, which disturb specific tasks due to some useless and even harmful shared features.
contrasting
train_17127
Our assumption is that the valence of a concept should correlate with its perceived moral polarity, e.g., morally repulsive ideas should evoke an unpleasant feeling.
we do not expect this correspondence to be perfect; for example, the concept of dessert evokes a pleasant reaction without being morally relevant.
contrasting
train_17128
Predictions for democracy show a trend toward morally positive sentiment, consistent with the adoption of democratic regimes in Western societies.
predictions for slavery trend down and suggest a drop around the 1860s, coinciding with the American Civil War.
contrasting
train_17129
Garcia et al., 1997;Cofield et al., 2010;Lazarus et al., 2015;Chiu et al., 2017).
such manual approaches often require an exhaustive list of linguistic cues, which costs a significant amount of human effort and its generalizability may be limited by the small sample size.
contrasting
train_17130
Cue phrases: association, associated with, predictor, at high risk of. The statement shows that one variable directly changes the other.
the relation carries an element of doubt in it, which is normally via hedges or modalities.
contrasting
train_17131
Because some languages are used in multiple countries and some countries did not publish many papers, not all results in Table 6 and Table 7 are comparable.
we were able to identify four country-language pairs that correspond well to each other: Germany-German, Italy-Italian, France-French, and China-Chinese.
contrasting
train_17132
In a parallel development, researchers have recently started to view fact checking as a task that can be partially automated, using machine learning and NLP to automatically predict the veracity of claims.
existing efforts either use small datasets consisting of naturally occurring claims (e.g.
contrasting
train_17133
Contributions: The latter category above is the most related to our paper as we consider evidence documents.
existing models are not trained jointly for evidence identification, or for stance and veracity prediction, but rather employ a pipeline approach.
contrasting
train_17134
Further, Node2Vec (Grover and Leskovec, 2016) proposed a biased random walk to search a network and generate node sequences based on depth-first search and breadth-first search.
those methods only embed the structure information into vector representations, while ignoring the informative textual contents associated with vertices.
contrasting
train_17135
Finally, we use N(v_i) to refer to v_i's neighbours, i.e., all nodes which are directly connected to v_i.
to most existing models, where user representations are static, our model uses an encoder which takes as input a pre-computed user vector, performs a dynamic exploration of its neighbours in the social graph, and updates the user representation given the relevance of its connections for a target task.
contrasting
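A minimal sketch of the dynamic user encoder idea described above, assuming a single attention layer over the pre-computed vectors of N(v_i); the class and its layers are illustrative, not the architecture of the model in the excerpt.

```python
import torch
import torch.nn as nn

# Hedged sketch: start from a pre-computed user vector, attend over the
# vectors of its graph neighbours N(v_i), and update the representation
# with a relevance-weighted neighbourhood summary.
class NeighbourAwareUserEncoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.att = nn.Linear(2 * dim, 1)       # scores each (user, neighbour) pair
        self.update = nn.Linear(2 * dim, dim)  # fuses user vector and context

    def forward(self, user_vec, neighbour_vecs):
        # user_vec: (dim,); neighbour_vecs: (|N(v_i)|, dim)
        u = user_vec.expand(neighbour_vecs.size(0), -1)
        weights = torch.softmax(self.att(torch.cat([u, neighbour_vecs], -1)), dim=0)
        context = (weights * neighbour_vecs).sum(0)  # relevance-weighted summary
        return torch.tanh(self.update(torch.cat([user_vec, context], -1)))
```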
train_17136
Our work is motivated by the goal of assisting human moderators of online communities by preemptively signaling at-risk conversations that might deserve their attention.
we caution that any automated systems might encode or even amplify the biases existing in the training data (Park et al., 2018; Sap et al., 2019; Wiegand et al., 2019), so a public-facing implementation would need to be exhaustively scrutinized for such biases (Feldman et al., 2015).
contrasting
train_17137
As expected, the Seq2Seq model achieves higher scores with filtered conversation as input.
this is not the case for the VAE model.
contrasting
train_17138
As shown in Table 3, applying Reinforcement Learning does not lead to higher scores on the three automatic metrics.
human evaluation (Table 4) shows that the RL model creates responses that are potentially better at mitigating hate speech and are more diverse, which is consistent with .
contrasting
train_17139
for all the testing Gab data can achieve 4.2 on BLEU, 20.4 on ROUGE, and 18.2 on METEOR.
this response is not considered as high-quality because it is almost a universal response to all the hate speech, regardless of the context and topic.
contrasting
train_17140
The sampled results in Figure 4 show that the Seq2Seq and the RL model can generate reasonable responses for intervention.
as is to be expected with machine-generated text, in the other human evaluation we conducted, where Mechanical Turk workers were also presented with sampled human-written responses alongside the machine generated responses, the human-written responses were chosen as the most effective and diverse option a majority of the time (70% or more) for both datasets.
contrasting
train_17141
(2018) defined this term when they found that the leave-one(token)-out (LOO) method might find a completely different set of most influential words if the most un-influential word is dropped from the sentence.
we do not observe this problem in our case: out of 419 predictions, 379 times the most influential word remains the most influential and 33 times it becomes the second most influential one after we drop the most un-influential words.
contrasting
train_17142
2) we cannot guarantee that the resulting text is still a natural input.
these challenges do not play much of a role in our domain.
contrasting
train_17143
This post fits the aggression label, by challenging authority and insulting, and the loss label, by evoking incarceration.
a loss label is more suitable, because: 1) though the anger seems to target specific players, the text as a whole reveals that the poster is highlighting institutional violence; and 2) the sad frown emoji, paired with the context that one of the poster's loved ones was recently arrested on a drug charge ("12z" refers to law enforcement's drug unit), cements that the user is grieving in this post.
contrasting
train_17144
It is convenient for computer scientists to regard the labels as ground truth, the dataset as static and try different models to improve the f-score.
as domain experts gained a deeper understanding about the twitter users, they marked 12% of the aggression labels as controversial (section 3); the annotation may evolve over time.
contrasting
train_17145
We are also grateful to our anonymous reviewers for their insightful and constructive feedback.
to many decades of research on oral code-switching, the study of written multilingual productions has only recently enjoyed a surge of interest.
contrasting
train_17146
Recent research from the sociolinguistic perspective has considered CS in an array of online genres: e.g., English-Jamaican Creole in email (Hinrichs, 2006), Malay-English language in online discussion forums (McLellan, 2005), English-Spanish bilingual blogging (Montes-Alcalá, 2007), and English-Chinese instant messaging (Lam, 2009).
these studies have analyzed productions of a small number of authors (even a single person), and each is restricted to a single language-pair.
contrasting
train_17147
A recently proposed zero-shot intent classification method, IntentCapsNet, has been shown to achieve state-of-the-art performance.
it has two unaddressed limitations: (1) it cannot deal with polysemy when extracting semantic capsules; (2) it hardly recognizes the utterances of unseen intents in the generalized zero-shot intent classification setting.
contrasting
train_17148
To improve business effectiveness and user satisfaction, accurately identifying the intents behind user utterances is indispensable.
it is extremely challenging not only because user queries are sometimes short and expressed diversely, but also because it may continuously encounter new or unacquainted intents popped up quickly from various domains.
contrasting
train_17149
(2018) extend capsule networks for zero-shot intent classification by transferring the prediction vectors from seen classes to unseen classes.
there are some key issues left to be resolved, including how to deal with polysemy in word embeddings and how to improve the model generalization ability to unseen intents in the generalized zero-shot intent classification setting.
contrasting
train_17150
A simple transfer strategy is to share the whole matrix in both target and source domains.
such a method will make it less flexible to capture domain-specific match information, since the local match information is likely to be varied across domains.
contrasting
train_17151
With the development of NLP technologies, news can be automatically categorized and labeled according to a variety of characteristics, at the same time be represented as low dimensional embeddings.
it lacks a systematic approach that effectively integrates the inherited features and inter-textual knowledge of news to represent the collective information with a dense vector.
contrasting
train_17152
Nonetheless, News2vec achieves state-of-the-art performance after sequence fine-tuning.
the result of Paragraph Vector cannot be further improved by fine-tuning.
contrasting
train_17153
Building on this line of work, Zhang and Lapata (2017) combine a novel sequence-to-sequence encoder-decoder model with a deep reinforcement learning framework that rewards the system for providing simple, fluent output, similar in meaning to the input.
one of the main challenges for such comprehensive end-to-end systems is the ability to address specific types of errors independently.
contrasting
train_17154
To further assess generalisability of the model, we test it on CEFR-LS, as well as BENCHLS for consistency (see Table 2).
as the words in BENCHLS dataset were selected at random rather than according to their complexity (Paetzold and Specia, 2016a), the results as expected are lower.
contrasting
train_17155
Previous work has framed this task as ranking words according to simplicity (Paetzold and Specia, 2016a).
a system that is only aimed at selecting the simplest candidate may return a substitution that does not fit the surrounding context, is ungrammatical or not semantically equivalent to the original.
contrasting
train_17156
Using the MedLit data in the RNN provides a significant improvement (65% vs 64%).
there is no difference in the BERT models, likely because of the large amount of i2b2 training data.
contrasting
train_17157
We found that running the MedLit + TR i2b2 experiment with just 25% of the i2b2 dataset with the RNN model performs as well (no significant difference) as training on the full i2b2 dataset (Figure 4), and 12% better than the model using 25% of the i2b2 data alone.
when using the BERT model, MedLit + TR i2b2 only performs better than i2b2 alone if less than 25% of the training data is available.
contrasting
train_17158
These methods rely on the news click histories to model users' news reading interest.
these methods usually suffer from the data sparsity problem, since the news click behaviors of many users on news platforms are very limited, making it difficult for these methods to learn accurate representations of these users.
contrasting
train_17159
1, since the user posts the search query "toyota crown", and browses the webpage of "Pros and Cons of Toyota" for detailed information, she may have high interest in Toyota cars and is likely to read news articles related to Toyota cars.
this user may click only a few news articles, and her interest in Toyota cars cannot be mined from the news click history.
contrasting
train_17160
(2014) proposed a TopicMF method to represent users and items by incorporating the topics extracted from review texts to enhance the learning of latent user and item factors from the rating matrix via non-negative matrix factorization (NMF).
these methods only extract topics from reviews, and cannot effectively utilize the contexts and word orders in reviews, both of which are important for learning accurate user and item representations.
contrasting
train_17161
In this bipartite graph, users and items are represented as nodes, and the interactions of user-item pairs are regarded as edges.
different from typical graph neural networks (Veličković et al., 2017), the input of typical review-based recommender systems is only a user-item pair (two nodes with an edge) rather than the entire user-item graph.
contrasting
train_17162
The models capture multiplicative interactions between these elements and are thus able to make large shifts in event semantics with only small changes to the arguments.
previous work mainly focuses on the nature of the event and lose sight of external commonsense knowledge, such as the intent and sentiment of event participants.
contrasting
train_17163
Spelling correction (Mays et al., 1991;Islam and Inkpen, 2009) and grammar error correction (Sakaguchi et al., 2017) are useful tools which can block editorial adversarial attacks, such as swap and insertion.
they cannot handle word-level attacks that do not cause spelling and grammar errors.
contrasting
train_17164
These models are node-based, i.e., they form pair representations based solely on the two target node representations.
entity relations can be better expressed through unique edge representations formed as paths between nodes.
contrasting
train_17165
They typically operate on the nodes, updating the representations during training.
a relation between two entities depends on different contexts.
contrasting
train_17166
Among three semi-supervised models, CPLS model achieves the greatest improvement, verifying the effectiveness of the projection functions.
the gain of CPLS model in the aspect of style accuracy is not that significant.
contrasting
train_17167
This example demonstrates that user comments closely relate to the intent of an edit, the actual edit operation, and the content and location in the document that underwent change.
during collaborative document authoring, the tracking and maintenance of comments becomes increasingly more challenging due to the large number of edits and comments that authors make.
contrasting
train_17168
(2010) consider both comment and document revision for lexical simplification.
they use comments as meta-data to identify trusted revisions, rather than directly modeling the relationship between comments and edits.
contrasting
train_17169
The aforementioned prior work mostly focuses on building the best neural network model independent of any model size or memory constrains.
recent work by (Kozareva, 2018, 2019) shows the importance of building on-device text classification models that can preserve user privacy, provide consistent user experience and, most importantly, are compact in size, while still achieving state-of-the-art results.
contrasting
train_17170
The BPE algorithm is deterministic because it segments greedily from left to right.
sR can sample multiple segmentations stochastically.
contrasting
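The contrast between deterministic BPE segmentation and stochastic sampling can be illustrated with the sentencepiece library, assuming a trained unigram model file ("m.model" is a placeholder):

```python
import sentencepiece as spm

# "m.model" is a placeholder for a trained unigram SentencePiece model.
sp = spm.SentencePieceProcessor(model_file="m.model")

# Deterministic: the same input always yields the same token sequence.
print(sp.encode("query completion", out_type=str))

# Stochastic (subword regularization): each call may yield a different
# segmentation, sampled in proportion to its probability.
for _ in range(3):
    print(sp.encode("query completion", out_type=str,
                    enable_sampling=True, alpha=0.1, nbest_size=-1))
```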
train_17171
We can extract a set of top B (≥ N) candidates using beam search with beam size B, namely in descending order of likelihood.
unlike the case of deterministic segmentation, in the case of stochastic segmentation the same query q_i, q_j (i ≠ j) with different token sequences t_i, t_j may exist.
contrasting
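A small sketch of the deduplication this implies: merge beam candidates whose token sequences detokenize to the same query before taking the top N. Keeping the best score per query is one simple choice (the actual system might marginalize instead), and simple concatenation as detokenization is an assumption.

```python
def top_n_queries(candidates, n):
    """Merge beam-search candidates that detokenize to the same query.

    candidates: list of (token_sequence, log_likelihood) pairs, size B >= n.
    Under stochastic segmentation the same surface query can appear with
    several distinct token sequences, so duplicates are merged before ranking.
    """
    best = {}
    for tokens, score in candidates:
        query = "".join(tokens)  # assumption: subwords concatenate back to the query
        best[query] = max(score, best.get(query, float("-inf")))
    return sorted(best, key=best.get, reverse=True)[:n]
```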
train_17172
By following (Kudo, 2018), we can find the most likely segmentation sequence t starting from all of the n-best segmentations t̂_1, …, t̂_n of S(p) rather than from only t̂_1.
we observe that this n-best decoding performs worse than one-best decoding.
contrasting
train_17173
On the other hand, BPE is poor when the query length (or prefix length) is short.
for a longer case, its MRR is close to that of the character baseline.
contrasting
train_17174
Because of controllability motivations, these axes and labels are generally known a priori.
counterfactual rewriting focuses on the causes and effects of a story, dimensions that can require more complex and diverse, yet potentially subtle, changes to accommodate the counterfactual event.
contrasting
train_17175
Neural question generation (NQG) has been extensively studied because of its broad application in question answering (QA) systems (Tang et al., 2018;Duan et al., 2017), reading comprehension (Kim et al., 2019;, and visual question answering (Fan et al., 2018).
the research of open-domain conversational question generation (CQG) (Hu et al., 2018) is still in its infancy.
contrasting
train_17176
News titles are short and succinct, and thus only using news titles may lose quite a lot of useful information in comment generation.
a news article and a comment are not a pair of parallel texts.
contrasting
train_17177
On the other hand, VAE-based models with the input of sequences usually adopt RNNs as encoders or decoders.
latent variable collapse (i.e., the latent variable collapses to obey the prior distribution) causes the sequential VAE to generate texts without using the information in the latent code, so the latent code cannot carry enough information about text features.
contrasting
train_17178
Taking TGVAE as an example, it guides the generation of the VAE latent code by topic distribution.
it does not separate the semantic and structure latent codes explicitly.
contrasting
train_17179
Similar to existing topic modeling methods, we treat bag-of-words representations of texts as input.
since the encoder in the topic modeling component is expected to act as a discriminator and guarantee that texts generated by the sequence modeling decoder have specific topic information, the encoder cannot take discrete representations (e.g., one-hot representations) of texts as input.
contrasting
train_17180
This is a denoising preprocessing step that makes the model more reliable.
to make the discriminator applicable, our model takes all of the words in the document as input while outputting words from a smaller vocabulary, as a topic model does.
contrasting
train_17181
In this paper, we formulate phrase grounding as a sequence labeling task where we treat candidate regions as potential labels, and use neural chain Conditional Random Fields (CRFs) to model dependencies among regions for adjacent mentions.
to standard sequence labeling tasks, the phrase grounding task is defined such that there may be multiple correct candidate regions.
contrasting
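For intuition, a linear-chain CRF over mentions can be decoded with standard Viterbi once emission scores (mention-region compatibility) and transition scores (region pairs for adjacent mentions) are available; the sketch below is generic Viterbi decoding, not the exact model from the excerpt.

```python
import torch

# Hedged sketch: Viterbi decoding for a linear-chain CRF over phrase mentions,
# where the "labels" are candidate image regions. emissions[i, r] scores
# region r for mention i; transitions[r, r2] scores region pairs for adjacent
# mentions. Both would come from neural networks in the model described above.
def viterbi_decode(emissions, transitions):
    n_mentions, n_regions = emissions.shape
    score = emissions[0]  # best score of any path ending in each region
    backpointers = []
    for i in range(1, n_mentions):
        # total[prev, cur] = score so far + transition + emission of cur
        total = score.unsqueeze(1) + transitions + emissions[i].unsqueeze(0)
        score, best_prev = total.max(dim=0)
        backpointers.append(best_prev)
    # Follow backpointers to recover the best region sequence.
    path = [int(score.argmax())]
    for best_prev in reversed(backpointers):
        path.append(int(best_prev[path[-1]]))
    return list(reversed(path))
```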
train_17182
The success of these structured prediction methods shows the advantage of considering entity dependencies in learning and prediction.
these approaches capture certain relations in an ad hoc manner, and resort to approximate inference (Wang et al., 2016) or non-differentiable losses.
contrasting
train_17183
In recent years, video semantic comprehension has attracted much research attention, with a number of datasets and tasks being proposed such as activity recognition (Xu et al., 2017), dense video captioning (Krishna et al., 2017a), visual question answering (Lei et al., 2018, 2019), etc.
most works are limited to only capturing coarse semantic information such as action recognition in broad categories.
contrasting
train_17184
All currently published models for addressing this problem can be categorized into two types: (i) top-down approach: it does classification and regression for a set of pre-cut video segment candidates; (ii) bottom-up approach: it directly predicts probabilities for each video frame as the temporal boundaries (i.e., start and end time point).
both approaches suffer from several limitations: the former is computation-intensive for densely placed candidates, while the latter has trailed the performance of the top-down counterpart thus far.
contrasting
train_17185
These models suffer the same drawbacks as above mentioned in the top-down approach for NLVL.
with the advent of the first performance-comparable bottom-up object detector, CornerNet (Law and Deng, 2018), the bottom-up approaches begin to gain unprecedented attention (Zhou et al., 2019a,b; Duan et al., 2019), which inspires us to explore a decent bottom-up framework for NLVL.
contrasting
train_17186
In principle, a larger t usually gives us a more stable mistake estimation.
a larger t also requires more computation resources.
contrasting
train_17187
And (2) "Team A at Team B" is a way of saying that Team A, as the away team, is playing against Team B as the home team.
in almost all cases (only 1 exception out of more than 100), "Team A" was labelled as ORG while "Team B" was labelled as LOC.
contrasting
train_17188
(2014) further applied previous detection models and manually corrected the Icelandic Frequency Dictionary (Pind et al., 1991) POS tagging dataset.
these two methods are specifically developed for POS tagging and cannot be directly applied to NER.
contrasting
train_17189
Most state-of-the-art models for named entity recognition (NER) rely on the availability of large amounts of labeled data, making them challenging to extend to new, lowerresourced languages.
there are now several proposed approaches involving either cross-lingual transfer learning, which learns from other highly resourced languages, or active learning, which efficiently selects effective training data based on model predictions.
contrasting
train_17190
The variations on the second and third keyphrases are larger.
we found the variations are more about which keyphrases they choose to enter, not about whether a phrase is a keyphrase or not.
contrasting
train_17191
Knowledge graphs are structured representations of real world facts.
they typically contain only a small subset of all possible facts.
contrasting
train_17192
DistMult (Yang et al., 2015) is a special case of RESCAL with a diagonal matrix per relation, which reduces overfitting.
the linear transformation performed on entity embedding vectors in DistMult is limited to a stretch.
contrasting
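A minimal sketch of the two scoring functions makes the restriction concrete; tensor shapes are illustrative.

```python
import torch

# RESCAL scores a triple with a full d x d matrix per relation,
# while DistMult keeps only its diagonal r, i.e. M_r = diag(r).
def rescal_score(h, M_r, t):       # h: (d,), M_r: (d, d), t: (d,)
    return h @ M_r @ t             # general bilinear form

def distmult_score(h, r, t):       # r: (d,), the diagonal of M_r
    return torch.sum(h * r * t)    # equals h @ torch.diag(r) @ t: a per-axis stretch
```

A consequence of the diagonal form is that the DistMult score is symmetric in h and t, so it cannot distinguish a relation from its inverse.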
train_17193
First, the chosen input texts must be classified into the same class by both models so the humans make decisions based only on the different explanations.
it is worth considering both the cases where both models correctly classify and where they misclassify.
contrasting
train_17194
If they select the right model with high confidence, the explanation method will get a higher positive score.
a confident but incorrect answer results in a large negative score.
contrasting
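Read literally, this scoring rule can be sketched as follows; the linear scale is an assumption, not the published scheme.

```python
# Hypothetical scoring rule matching the description: an answer weighted by
# confidence, positive when the human picks the right model, negative otherwise.
def explanation_score(picked_right_model: bool, confidence: float) -> float:
    assert 0.0 <= confidence <= 1.0
    return confidence if picked_right_model else -confidence
```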
train_17195
Each of the thresholds is set subject to sufficient purity of the classification results above it.
their method is applicable to CNNs with only one linear layer as FC, while our CNNs have an additional hidden layer (with ReLU activation).
contrasting
train_17196
Hence, they can help humans identify the poor model to some extent.
there was no clear winner when both CNNs predicted wrongly.
contrasting
train_17197
So, the evidence n-grams are more likely related to the predicted class than the other classes.
they may be less relevant than LIME's as the evidence is generated from a global explanation of the model (DTs).
contrasting
train_17198
First, LRP (N) generates good evidence for correct predictions, so it can gain high scores in the columns.
the evidence for incorrect predictions is usually not convincing, so the counterevidence (which is likely to be the evidence of the correct class) can attract humans' attention.
contrasting
train_17199
(2012) proposed BioNERDS, a NER system for detecting database and software names in scientific literature using extraction rules generated by examining 30 bioinformatics articles.
there has been no existing framework for modeling the resource role and function at such a fine-grained level in general domain scientific text.
contrasting