id           stringlengths   7 – 12
sentence1    stringlengths   6 – 1.27k
sentence2    stringlengths   6 – 926
label        stringclasses   4 values
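The column summary above describes a sentence-pair dataset: an `id`, two sentence fields (`sentence1`, `sentence2`), and a four-class `label`; every row in this excerpt carries the `contrasting` class. As a minimal, hedged sketch of how that summary can be recomputed for rows of this shape (the two sample rows below are copied from the excerpt; no loading path or hub dataset name is assumed):

```python
# Minimal sketch: recomputing the column summary above for rows shaped like this
# preview. The two rows are copied from the excerpt; how the full split is stored
# (JSONL, CSV, a hub dataset, ...) is not specified here, so no loader is assumed.
from collections import Counter

rows = [
    {
        "id": "train_3400",
        "sentence1": "They are rank-4 tensors of size ≈10^15.",
        "sentence2": "we already discussed that they are very sparse, for two reasons: "
                     "(i) We make the assumption that there is no interaction between dimensions.",
        "label": "contrasting",
    },
    {
        "id": "train_3464",
        "sentence1": "Previous research and work in Tibetan word segmentation have made great progress.",
        "sentence2": "cases with unknown words are not satisfactory.",
        "label": "contrasting",
    },
]

# Mirror the header fields: string-length range per column and the set of label classes.
for field in ("id", "sentence1", "sentence2"):
    lengths = [len(r[field]) for r in rows]
    print(f"{field}: stringlengths {min(lengths)}-{max(lengths)}")
print("label:", Counter(r["label"] for r in rows))
```

On the full split, the same loop reproduces the ranges shown in the header (e.g., `sentence1` lengths from 6 up to about 1.27k characters).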
train_3400
They are rank-4 tensors of size ≈10^15.
we already discussed that they are very sparse, for two reasons: (i) We make the assumption that there is no interaction between dimensions.
contrasting
train_3401
The learning objective becomes: The hidden layer S of the autoencoder gives us synset embeddings.
the lexeme embeddings are defined when transitioning from W to S, or more explicitly by: there is also a second lexeme embedding in AutoExtend when transitioning from S to W: Aligning these two representations seems natural, so we impose the following lexeme constraints: this can also be expressed dimension-wise.
contrasting
train_3402
Hassan and Mihalcea (2009) built two sets of cross-lingual datasets by translating the English MC-30 (Miller and Charles, 1991) and the WordSim-353 (Finkelstein et al., 2002) datasets into three languages.
these datasets have several issues due to their construction procedure.
contrasting
train_3403
k = 3), i.e., the same number of possible senses.
the number of senses in WordNet (Miller, 1995) varies from 1 such as "ben" to 75 such as "break".
contrasting
train_3404
(2) The initial value of sense representation is critical for most statistical clustering based approaches.
previous approaches usually adopted random initialization (Neelakantan et al., 2014) or the mean average of candidate words in a gloss .
contrasting
train_3405
One possible reason is that we set the number of context clusters for each word to be the same as the number of its corresponding senses in WordNet.
not all senses appear in our experimental corpus, which could lead to fragmented context clustering results.
contrasting
train_3406
(2015), that implement very similar ideas.
one major difference between their work and ours is that their strategy is in the same direction as (Yih et al., 2012), which might result in poor performance on general semantic tasks.
contrasting
train_3407
the input vector, not unlike what arithmetic negation would do.
the Skip-gram-based not matrix is remarkably identity-like, with large positive values concentrated on the diagonal.
contrasting
train_3408
Specifically, each participant in the dialogue usually has specific sentiment polarities towards different topics.
most existing sequential data modeling methods are not capable of incorporating the information from both the topic and the author's identity.
contrasting
train_3409
Compared to the traditional HMM-based method, it explores deeply into the structure of sentences, and is more flexible in taking external features into account.
it doesn't suffer from the training difficulties of recurrent neural networks, such as the vanishing gradient problem.
contrasting
train_3410
In particular, we assume that there is a labeling matrix L such that and a transition matrix T such that These two equations establish the relation between the hidden state and the labels.
we use a neural network model M to model the relation between the hidden states and the observations.
contrasting
train_3411
One naive approach is to use aggregated word vectors across a document (e.g., a document's average word-vector location) as input to a standard classifier (e.g., logistic regression).
a document is actually an ordered path of locations through R^K, and simple averaging destroys much of the available information.
contrasting
train_3412
Indeed, we find that all methods are often outperformed by phrase-count logistic regression with rare-feature up-weighting and carefully chosen regularization.
the out-of-the-box performance of Word2Vec inversion argues for its consideration as a simple default in document classification.
contrasting
train_3413
Choosing the most frequent property p may lead to descriptions that closely resemble those observed in the data.
we predict that the availability of a highly discriminatory property may change this preference.
contrasting
train_3414
By augmenting a labeled dataset with unlabeled data, we can use a bootstrapping framework to improve predictive accuracy, and reduce the need for labeled data-which could make it much easier to port discourse parsing algorithms to new domains.
a fully unsupervised parser may not be desirable because in many applications specific discourse relations must be identified, which would be difficult to achieve without the use of labeled examples.
contrasting
train_3415
In contrast, the MRNN approach tends to generate such anaphoric relationships correctly.
the D-ME LM maintains an explicit coverage state vector tracking which attributes have already been emitted.
contrasting
train_3416
The human evaluation agrees with the CAR slightly better than the automatic metrics.
the agreement rates are still less than 0.7 for all pairs of compared systems.
contrasting
train_3417
In this scenario, the metric BLEU assigns the same score of 0.489 for these two translations.
the representation based metric associates hypothesis 2 with a much higher score than that of hypothesis 1, namely 0.865 and 0.555, respectively.
contrasting
train_3418
One example is the English word "workflow", which appears in the French post-editions both as is (21 sentences) and translated into "flux de travail" (34 sentences).
in the other language directions all the occurrences of "workflow" are either translated or kept in English.
contrasting
train_3419
The probability for the document V is given by: where F_θ(V) is the "free energy", which can be analytically integrated easily, and Z_D is the "partition function" for normalization, only associated with the document length D. As the hidden state and document are conditionally independent, the conditional distributions are derived: where Equation (3) gives the softmax units describing the multinomial distribution of the words, and Equation (4) serves as an efficient inference from words to semantic meanings, where we adopt the probabilities of each hidden unit "activated" as the topic features.
RSM is naturally learned by minimizing the negative log-likelihood function (ML) as follows: the gradient is intractable for the combinatorial normalization term Z_D.
contrasting
train_3420
Although both CD and α-NCE run slower when the input dimension increases, CD tends to take much more time due to the multinomial sampling at each iteration, especially when more Gibbs steps are used.
running time stays reasonable in α-NCE even if a larger noise size or a larger dimension is applied.
contrasting
train_3421
Table 2 (the performance of sentiment classification accuracy on the IMDB dataset using RSMs compared to other BoW-based approaches): (2011)'s "full" model: 87.44%; WRRBM (Dahl et al., 2012): 87.42%; RSM:CD: 86.22%; RSM:α-NCE-5: 87.09%; RSM:α-NCE-5 (idf): 87.81%.
table 2 shows the performance of RSM in sentiment classification, where model combinations reported in previous efforts are not considered.
contrasting
train_3422
Specifically, the regularizer defines a penalty if the source classifier and the target classifier make different predictions on an unlabeled target instance.
with this regularizer, EA++ does not strictly restrict either the source classifier or the target classifier to lie in the target subspace X t .
contrasting
train_3423
However, with this regularizer, EA++ does not strictly restrict either the source classifier or the target classifier to lie in the target subspace X t .
as we have pointed out above, when only the induced features are used, our method leverages the unlabeled target instances to force the learned classifier to lie in X t .
contrasting
train_3424
Therefore x_{i,j} refers to the concatenated word vector from the i-th word to the (i+j)-th word: sequential word concatenation x_{i,j} works like an n-gram model, which feeds local information into the convolution operations.
this setting can not capture long-distance relationships unless we enlarge the window indefinitely which would inevitably cause the data sparsity problem.
contrasting
train_3425
The errors in parse trees inevitably affect the classification accuracy.
the parser works substantially better on the TREC dataset since all questions are in formal written English, and the training set for the Stanford parser already includes the QuestionBank (Judge et al., 2006), which includes 2,000 TREC sentences.
contrasting
train_3426
Given their large numbers of parameters, often in the millions, one would expect that such models can only be effectively learned on very large datasets.
we show here that a complex deep convolution network can be trained on about a thousand training examples, although careful model design and regularisation is paramount.
contrasting
train_3427
(2010) preprocessed the text by stemming, down-casing, and discarding feature instances that occurred in fewer than five reviews.
we did not perform any processing of the text or feature engineering, apart from tokenization, instead learning this automatically.
contrasting
train_3428
This suggests that the tendency by certain sites to review specific movies is in itself indicative of the revenue.
this improvement is more difficult to discern with the ANN Text+Meta+Domain model, possibly due to redundancy with the meta data.
contrasting
train_3429
Table 3 (hyper-parameters in our experiments): sampling (k): 5 / -; # of iterations (T): 5 / 20; # of threads: 56; # of dimensions (D): 300.
in particular, the original SGNS uses β(c_{i,j}) = 1 for all (i, j) and a logistic loss function: GloVe uses a least-squares loss function: Table 2 lists the factors of each configuration used differently in SGNS and GloVe.
contrasting
train_3430
Note that HUCRF does not always perform better than CRF when initialized randomly.
HUCRF consistently outperforms CRF with the pre-training methods proposed in this work.
contrasting
train_3431
It might seem that a low similarity score between two retellings simply indicates that one retelling includes fewer story elements.
given the equivalent number of story elements recalled by the two groups, we can assume that a low similarity score indicates a difference in the quality rather than the quantity of information in the retellings.
contrasting
train_3432
Due to the nature of the olfactory data source (see Section 3), it is not possible to build olfactory representations for all concepts in the test sets.
cross-modal mappings yield an additional benefit: since linguistic representations have full coverage over the datasets, we can project from linguistic space to perceptual space to also obtain full coverage for the perceptual modalities.
contrasting
train_3433
Delexicalized transfer yields worse results than a supervised lexicalized parser trained on a target language treebank.
for languages with no treebanks available, it may be useful to obtain at least a lower-quality parse tree for tasks such as information retrieval.
contrasting
train_3434
Traditionally, transition-based parsers were trained to follow a so-called static oracle, which is only defined on the configurations of a canonical computation that generates the gold tree, returning the next transition in said computation.
dynamic oracles are non-deterministic (not limited to one sequence, but supporting all the possible computations leading to the gold tree), and complete (also defined for configurations where the gold tree is unreachable, choosing the transition(s) that lead to a tree with minimum error).
contrasting
train_3435
Thus, the configuration has only one individually unreachable arc (0→2), but its loss is 2.
it is worth noting that non-arc-decomposability in the parser is exclusively due to cycles.
contrasting
train_3436
Our proposed method achieved the state-of-the-art F-score and OOV recall on two common corpora, PKU and MSR.
note that we only exploited the flat segmented results of internal word structure here.
contrasting
train_3437
While it has been observed repeatedly that using multiple source languages improves performance (Fossum and Abney, 2005), most available techniques work best for closely related languages.
this paper presents an effort to learn POS taggers for truly low-resource languages, with minimum assumptions about the available language resources.
contrasting
train_3438
Distant supervision is a widely applied approach to automatic training of relation extraction systems and has the advantage that it can generate large amounts of labelled data with minimal effort.
this data may contain errors and consequently systems trained using distant supervision tend not to perform as well as those based on manually labelled data.
contrasting
train_3439
The main advantage is its ability to automatically generate large amounts of training data.
this automatically labelled data is noisy and usually generates lower performance than approaches trained using manually labelled data.
contrasting
train_3440
The overall ratio of positive to negative sentences in this dataset was 1:5.1.
this changes to 1:2.3 after removing examples identified by PRA.
contrasting
train_3441
Most Open IE systems employ syntactic information such as parse trees and part of speech (POS) tags, but ignore lexical information.
previous work suggests that Open IE would benefit from lexical information because the same syntactic structure may correspond to different relations.
contrasting
train_3442
(2013) is not helpful, which we attribute to data sparsity.
smoothed word representations do improve F-measure.
contrasting
train_3443
As we analyze, shortest dependency paths and subtrees play different roles in relation classification.
we can see that DT-RNN does not distinguish the modeling processes of shortest paths and subtrees.
contrasting
train_3444
Unlike in the news domain, in the biomedical domain it is rare for the same word or phrase to refer to multiple different concepts.
different words or phrases often refer to the same concept.
contrasting
train_3445
Finally, this work challenges prior claims that spoken language is "more complex" than other genres with regards to referentiality.
whereas in a spoken discourse the potential addressees are by default the participants, web texts such as the reviews studied here have no such default, and may include complex, creative, and domain-specific deictic reference that can be important for computational systems to address.
contrasting
train_3446
Decisions relating to relevance of material to a given topic (MO question) are delegated to experts on the website.
the information seeker (MO user posting the question) remains the ultimate judge of relevance.
contrasting
train_3447
Understanding the search intents of queries is essential for satisfying users' information needs and is very important for many search tasks such as personalized search, query suggestion, and search result presentation.
it is not a trivial task because the underlying intents of the same query may be different for different users.
contrasting
train_3448
Clearly, it is very challenging to learn a CTH from multiple topic hierarchies in different articles due to the following 3 reasons: 1) A topic can be denoted by a variety of tags in different articles (e.g., "foreign aids" and "aids from other countries"); 2) Structural/hierarchical information can be inconsistent (or even opposite) across different articles (e.g., "response subtopicOf aftermath" and "aftermath subtopicOf response" in different earthquake event articles); 3) Intuitively, text descriptions of the topics in Wiki articles are supposed to be able to help determine subtopic relations between topics.
how can we model the textual correlation?
contrasting
train_3449
On the one hand, the results of a corrected run of the WHUNLP system provided by the SemEval organizers.
the results of an out of the competition version of the SPINOZAVU team explained in (Caselli et al., 2015).
contrasting
train_3450
As future work, we plan to explore in more detail this research line by applying more sophisticated approaches in the temporal analysis at document level.
this is not the only research line that we want to explore in depth.
contrasting
train_3451
To maintain comparability, we use the ACE-2005 documents with the same split as in (Ji and Grishman, 2008; Liao and Grishman, 2010b; Li et al., 2013) into 40 test documents and 559 training documents.
some evaluation settings differ: Li et al.
contrasting
train_3452
Additionally, name matching has been used as a component in cross language entity linking (McNamee et al., 2011a;McNamee et al., 2011b) and cross lingual entity clustering (Green et al., 2012).
little work has focused on logograms, with the exception of Cheng et al.
contrasting
train_3453
String Matching We consider two common string matching algorithms: Levenshtein and Jaro-Winkler.
because of the issues mentioned above we expect these to perform poorly when applied to Chinese strings.
contrasting
train_3454
"azul" in ES) and have a high similarity score in our proposed BC mapping, are correctly tagged.
OOV Brand have a very large prediction error rate due to the small training data.
contrasting
train_3455
This method for automatic paraphrasing has been discussed previously by Rastogi and Van Durme (2014).
whereas their work only discussed the idea as a hypothetical way of augmenting FN, we apply the method, vet the results, and release it as a public resource.
contrasting
train_3456
Third, we use a simple systematic process to ensure that the constructed data is enriched with "related" pairs, beyond what one would expect to obtain by random sampling.
to previous work, our enrichment process does not rely on a particular relatedness algorithm or resource such as Wordnet (Fellbaum, 1998), hence the constructed data is less biased in favor of a specific method.
contrasting
train_3457
For example, datasets constructed to assess lexical entailment (Mirkin et al., 2009) and lexical substitution (McCarthy and Navigli, 2009; Kremer et al., 2014; Biemann, 2013) methods.
the focus of the current work is on the more general notion of term-relatedness, which seems to go beyond these more concrete relations.
contrasting
train_3458
In addition, we note that a correlation measure gives equal weight to all pairs in the dataset.
in some NLP applications it is more important to properly distinguish related pairs from unrelated ones.
contrasting
train_3459
In some tasks, such as statistical machine translation (Kondrak et al., 2003) and sentence alignment, or when studying the similarity or intelligibility of the languages, cognates are seen as words that have similar spelling and meaning, their etymology being completely disregarded.
in problems of language classification, distinguishing cognates from borrowings is essential.
contrasting
train_3460
Note that the dev set is not used in the experiments of this paper, since ∆BLEU and IBM BLEU are metrics that do not require training.
the dev set is released along with a test set in the dataset release accompanying this paper.
contrasting
train_3461
This is expected, since BLEU treats all references as equal and has no way of discriminating between them.
correlation coefficients increase for ∆BLEU after adding lower scoring references.
contrasting
train_3462
Point-wise MI is often used to find interesting bigrams (collocations).
MI is actually better thought of as a measure of independence than of dependence (Manning et al., 1999).
contrasting
train_3463
LR is one of the most stable methods for automatic term extraction so far, and more appropriate for sparse data than other metrics.
LR is still biased towards two frequent words that are rarely adjacent, such as the pair (the, the) (Pantel et al., 2001).
contrasting
train_3464
Previous research and work in Tibetan word segmentation have made great progress.
cases with unknown words are not satisfactory.
contrasting
train_3465
(2012) designed and implemented a Tibetan word segmentation system named "SegT", which is a lexicon-based practical system with a constant lexicon.
it has difficulty identifying unknown words in newspaper articles and web documents, which are highly changeable texts over time.
contrasting
train_3466
In comparison, the advantage of the syntactic information is not at all obvious when they are used in a non-factorized fashion in the DEP model; it out-performs the window-based methods (below the dashed line) on only 3 datasets with limited margins.
the window-based methods consistently outperform the dependency-based methods on the FG dataset, confirming our intuition that window-based methods are better at capturing relatedness than similarity.
contrasting
train_3467
This is a favorable situation for implementing a bootstrapping approach in which the results for "good" entries are exploited for improving the results of the other ones.
such idea faces two problems: first, detecting "good" entries; second, learning a model from them for improving the performance of the other entries.
contrasting
train_3468
It is possible that this introduces a confounding factor.
since we do not see marked effects for gender or region, and since the English results closely track the German data, this seems unlikely.
contrasting
train_3469
Probabilistic topic models such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003) have been widely used for discovering latent topics from document collections by capturing words' co-occurring relations.
the "bag of words" assumption is employed in most existing topic models; it assumes the order of words can be ignored and the topic assignment of each word is conditionally independent given the topic mixture of a document.
contrasting
train_3470
(2012) developed a spatio-temporal model of conflict events in Afghanistan.
here we deal with temporal text data, and model several correlated outputs rather than their single output.
contrasting
train_3471
In terms of performing distributed representation learning for output variables, our proposed model shares similarity with the structured output representation learning approach developed by Srikumar and Manning (2014), which extends the structured support vector machines to simultaneously learn the prediction model and the distributed representations of the output labels.
the approach in (Srikumar and Manning, 2014) assumes the training labels (i.e., output values) are given and performs learning in the standard supervised in-domain setting, while our proposed distributed HMMs address cross-domain learning problems by performing unsupervised representation learning.
contrasting
train_3472
(2015) and Zhang (2015) have proposed independently to summarize source sentences with convolutional neural networks.
they both extend the neural network joint model (NNJM) of Devlin et al.
contrasting
train_3473
From a higher level view, EM-UNRAVEL can be seen as a specialized word based MT decoder that can efficiently generate and organize all possible translations in the E-step, and efficiently retrain the model {p(f |e)} on all these hypotheses in the M-step.
to DET-UNRAVEL, EM-UNRAVEL processes the input corpus sentence by sentence.
contrasting
train_3474
Similarly to DET-UNRAVEL, the previously described expansion and pruning step is implemented using two arrays H s and H t .
in EM-UNRAVEL the partial hypotheses in H s and H t use the same data structures since-in contrast to DET-UNRAVEL-recombination of hypotheses is possible.
contrasting
train_3475
Although the 1,000-best oracle remains at the same level over the iterations, the 1,000-best average score increases by 2 BLEU at the last iteration over the first 1,000-best hypotheses produced by Moses, pointing out a strong improvement of the average quality of the 1,000-best hypotheses.
except for the IN configuration on medical En→Fr, multi-pass Moses does not bring improvements by itself over the Rerank baseline.
contrasting
train_3476
These methods are useful in capturing semantic information carried by high-level units (such as phrases and beyond) and usually do not rely on word alignments.
they suffer from reduced accuracy for representing rare tokens, whose semantic information may not be well generalized.
contrasting
train_3477
This method makes full use of parallel corpora and produces high-quality word alignments.
it is unable to exploit the richer monolingual corpora.
contrasting
train_3478
In particular, the triangulation method, which translates by combining source-pivot and pivot-target translation models into a source-target model, is known for its high translation accuracy.
in the conventional triangulation method, information of pivot phrases is forgotten and not used in the translation process.
contrasting
train_3479
People with normal vision can read news documents with their eyes conveniently.
according to WHO's statistics, up to October 2013, 285 million people are estimated to be visually impaired worldwide: 39 million are blind and 246 million have low vision.
contrasting
train_3480
There is some thematically related work, such as automatic filtering of pornographic content (Polpinij et al., 2006;Sood et al., 2012;Xiang et al., 2012;Su et al., 2004), but we believe the nature of the task is significantly different such that a different approach is required.
text or document classification, the general technique employed in this paper, is a very common task (Manning et al., 2008).
contrasting
train_3481
determining the ease with which a written text can be understood by a reader, since age is certainly a dimension along which readability varies.
our literature review of this area suggested that the aspects being considered in readability assessment are sufficiently different from the dimensions that seem to be most relevant for media age appropriateness ratings.
contrasting
train_3482
Due to the inherent subjectivity and poor definition of the task, mentioned above, it is difficult for annotators to reliably produce these annotations (Bryant and Ng, 2015).
this requirement can be relinquished by treating GEC as a text-to-text rewriting task and borrowing metrics from machine translation, as Park and Levy (2011) did with BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007).
contrasting
train_3483
These four radicals altogether convey the meaning that "the moment when sun arises from the grass while the moon wanes away", which is exactly "morning".
it is hard to decipher the semantics of strokes, and radicals are the minimum semantic unit for Chinese.
contrasting
train_3484
Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button.
to the best of our knowledge, these tools require a user command to begin blocking.
contrasting
train_3485
The training goal is to minimize average perplexity across X.
a deeper look into perplexity beyond corpus-wide average reveals interesting findings.
contrasting
train_3486
The information of the background corpus has been incorporated using linear combination (Ponte and Croft, 1998).
to the simple strategy which smooths all documents with the same background, recently corpus structures have been exploited for more accurate smoothing.
contrasting
train_3487
Generally, given a non-smoothed document language model P(w|d), which indicates a word distribution for a term w in document d, we attempt to generate a smoothed language model P(w|d+) that could better estimate the text contents of a document d as d+, so as to avoid zero probabilities for those words not seen in d. Arbitrary assignment of pseudo word counts, such as add-λ for every unseen word, once was a major improvement for language model smoothing (Chen and Goodman, 1996).
the purpose of smoothing is to estimate the language model more accurately.
contrasting
train_3488
To the best of our knowledge, only (Li et al., 2010) used a supervised keyword extraction framework (based on KEA) with additional features, such as POS tags, to perform keyword extraction on Facebook posts.
at that time Facebook status updates or posts did not contain either hashtags or user mentions.
contrasting
train_3489
This works quite well in text documents, such as news articles, as we wish to find terms that occur frequently within that document, but are not common in the other documents in that domain.
we found that this approach does not work well in Twitter as tweets tend to be short and generally most terms occur only once, including their keywords.
contrasting
train_3490
This is because most keywords only occur once, which makes the TF component not very informative.
the MAUI baseline performs significantly better; this is because of the usage of many hand-engineered features using lists of words and Wikipedia, rather than simply relying on word counts.
contrasting
train_3491
(2) Negation is an important feature for our task.
having it alone is not enough to find ironic instances.
contrasting
train_3492
These results empirically validate (H2).
even though the algorithm improves the classifier performance, the number of queries is small which suggests that a much larger dataset is needed.
contrasting
train_3493
In our project, we intend to build an email briefing system which extracts and summarizes important email information for the users.
we believe there are critical components missing from the current research work.
contrasting
train_3494
replies, opens) as the indicator of email importance.
we argue that the user action does not necessarily indicate the importance of the email.
contrasting
train_3495
The words "and", "is", "was" and "by" have similar geometric arrangements in Wikipedia and in Twitter, since these common words are not key differentiators for these corpora.
the pronouns "I" and "you", are heavily used in Twitter but rarely used in Wikipedia.
contrasting
train_3496
A negative adjusted distance value means the word is more similar than at least half of words in its bucket. (Table 2: Characteristic Words in Twitter Corpora; for each example word, e.g., "bc", "ill", "cameron", "mentions", "miss", "yup", and "taurus", the table lists its most similar words in the Twitter and Wikipedia corpora.)
the words that are less similar than at least half of words in their buckets have positive adjusted distance values.
contrasting
train_3497
These supervised methods obtain high accuracies on newswire (Xue and Shen, 2003;Zhang and Clark, 2007;Jiang et al., 2009;Zhao et al., 2010;Sun and Xu, 2011).
manually annotated training data mostly come from the news domain, and the performance can drop severely when the test data shift from newswire to blogs, computer forums, and Internet literature (Liu and Zhang, 2012). Supervised approaches often have a high requirement on the quality and quantity of the annotated corpus, which is not always easy to build.
contrasting
train_3498
For example, in conversation 1 in Table 1, "如果 (if)", which is a very important cue for predicting the tense of the verb "废 (destroy)", is omitted.
(2) Effects of interactions on tense: to other genres, conversations are interactive, which may have an effect on tense: in some cases, tense can only be inferred by understanding the interactions.
contrasting
train_3499
According to Table 2, n-grams and dependency parsing features are useful to predict tense, and linguistic knowledge can further improve the accuracy of tense prediction.
adding conversation-specific features (interaction features) does not benefit Local(b+p+l).
contrasting
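Since every row pairs `sentence1` with `sentence2` under a four-class `label` (only `contrasting` is visible in this excerpt), the natural downstream use is sentence-pair classification. Below is a minimal, hedged sketch of encoding one pair from the excerpt for such a classifier; the `bert-base-uncased` checkpoint and the placeholder names for the three classes not shown here are assumptions, not something the preview specifies:

```python
# Hedged sketch: encoding one (sentence1, sentence2) pair from the preview for a
# generic sentence-pair classifier. The checkpoint "bert-base-uncased" and the
# placeholder names for the three label classes not shown in this excerpt are
# assumptions, not part of the preview.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["contrasting", "label_2", "label_3", "label_4"]  # only "contrasting" appears above

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

sentence1 = ("Most Open IE systems employ syntactic information such as parse trees "
             "and part of speech (POS) tags, but ignore lexical information.")
sentence2 = ("previous work suggests that Open IE would benefit from lexical information "
             "because the same syntactic structure may correspond to different relations.")

# Encode the two sentences as a single pair; the tokenizer inserts the separator token.
inputs = tokenizer(sentence1, sentence2, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# With an untrained classification head this prediction is meaningless; it only
# illustrates the input/output shape for a 4-way sentence-pair classifier.
print(LABELS[int(logits.argmax(dim=-1))])
```

Any encoder that accepts sentence pairs would work the same way; the only constraint taken from the preview itself is the four-way label space.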