Dataset columns:
- id: string (7–12 characters)
- sentence1: string (6–1.27k characters)
- sentence2: string (6–926 characters)
- label: string (4 classes)
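The rows below follow a simple four-field record per example. A minimal sketch of that structure in Python (the `Example` class name is a hypothetical choice for illustration; the field names mirror the columns above, and the on-disk loading format is not shown in this excerpt):

```python
from dataclasses import dataclass

# Record structure implied by the columns above (id, sentence1,
# sentence2, label). "Example" is a hypothetical container name.
@dataclass
class Example:
    id: str          # e.g. "train_700"
    sentence1: str   # first sentence of the pair
    sentence2: str   # second sentence of the pair
    label: str       # one of 4 classes; every row in this excerpt is "contrasting"

# One row transcribed verbatim from the excerpt below.
rows = [
    Example(
        id="train_700",
        sentence1="In transliteration, we face similar issues as in SMT, "
                  "such as lexical mapping and alignment.",
        sentence2="transliteration is also different from general SMT in "
                  "many ways.",
        label="contrasting",
    ),
]

# Filter by label, as one would when inspecting class balance.
contrasting = [r for r in rows if r.label == "contrasting"]
print(len(contrasting))
```

Since this excerpt contains only the "contrasting" class, the filter keeps every transcribed row; the full dataset would distribute rows across all four label classes.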
train_700
In transliteration, we face similar issues as in SMT, such as lexical mapping and alignment.
transliteration is also different from general SMT in many ways.
contrasting
train_701
Machine Translation (MT) quality has improved substantially in recent years due to applying data intensive statistical techniques.
state-of-the-art approaches are essentially lexical, considering every surface word or phrase in both the source sentence and the corresponding translation as an independent entity.
contrasting
train_702
This is just an approximation of the true lower bound, and bad estimates can lead to search errors.
the hope is that by choosing the right value of i, these estimates will be accurate enough to affect the search quality only slightly, which is analogous to "almost admissible" heuristics in A* search (Soricut, 2006).
contrasting
train_703
Most of the previous work on statistical machine translation relies on (local) associations of target words/phrases with source words/phrases for lexical selection.
in this paper, we present a novel approach to lexical selection where the target words are associated with the entire source sentence (global) without the need to compute local associations.
contrasting
train_704
Traditionally, Maxent is regularized by imposing a Gaussian prior on each weight: this L2 regularization finds the solution with the smallest possible weights.
on tasks like machine translation with a very large number of input features, a Laplacian L1 regularization that also attempts to maximize the number of zero weights is highly desirable.
contrasting
train_705
Because of this synonym problem, the BOW threshold θ has to be set lower than 0.5, which is observed experimentally.
if we set the threshold to 0.3, both t_1 and t_2 will be detected in the target sentence, and we found this to be a major source of undesirable insertions.
contrasting
train_706
Both constraints have been shown to be in very good fit with data from dependency treebanks (Kuhlmann and Nivre, 2006).
like all other such proposals, they are formulated on fully specified structures, which makes it hard to integrate them into a generative model, where dependency structures are composed from elementary units of lexicalized information.
contrasting
train_707
This paper does not consider stochastic dependency grammars directly, but see Section 8 for an application involving them.
it is straightforward to associate weights with dependencies, and since the dependencies are preserved by the transformations, obtain a weighted CFG.
contrasting
train_708
Almost linear CFLGs can represent a substantial fragment of a Montague semantics for English and such "linear" grammar formalisms as (multi-component) tree-adjoining grammars (both as string grammars and as tree grammars) and multiple context-free grammars.
iO macro grammars and parallel multiple context-free grammars cannot be directly represented because representing string copying requires multiple occurrences of a variable of type o → o.
contrasting
train_709
The overall performance of our semantic role labeling approach is not competitive with leading contemporary systems, which typically employ support vector machine learning algorithms with syntactic features (Pradhan et al., 2005) or syntactic tree kernels (Moschitti et al., 2006).
our work highlights a number of characteristics of the semantic role labeling task that will be helpful in improving performance in future systems.
contrasting
train_710
A strong syntactic/semantic correlation would suggest that further gains in the use of surrogate annotation data could be gained if syntactic similarity was computed between rolesets rather than their verbs.
this would first require accurate word-sense disambiguation both for the test sentences as well as for the parsed corpora used to calculate parse tree path frequencies.
contrasting
train_711
These feature-based methods are considered the state-of-the-art methods for SRL.
as we know, the standard flat features are less effective in modeling the syntactic structured information.
contrasting
train_712
In this point, one can say that the grammar-driven tree kernel is a specialization of the PT kernel.
the important difference between them is that the PT kernel is not grammar-driven, thus many nonlinguistically motivated structures are matched in the PT kernel.
contrasting
train_713
The labeled order tree kernel is much more flexible than the PT kernel and can explore much larger sub-tree features than the PT kernel.
the same as the PT kernel, the labeled order tree kernel is not grammar-driven.
contrasting
train_714
Their methods need to obtain a LTAG derivation tree for each parse tree before kernel calculation.
we use the notion of optional arguments to define our grammar-driven tree kernel and use the empirical set of CFG rules to determine which arguments are optional.
contrasting
train_715
When presented with some previously unseen test data, we are forced to rely on its automatic parse trees.
for the best results we should take advantage of gold parse trees whenever possible, including those of the labeled training data.
contrasting
train_716
(Caruana, 1997) discusses configurations where both used inputs and unused inputs (due to excessive noise) are utilized as additional outputs.
our work concerns linear predictors using empirical risk minimization.
contrasting
train_717
The number of hard cases specific to the B-I classifier indicates how the features contribute to the decision of splitting or continuing back-to-back NPs.
back-to-back NPs amount to 6% of the NPs in HEb Gold and 8% of the NPs in EN G. While in English most of these cases are easily resolved, Hebrew phenomena such as null-equatives and free word order make them harder.
contrasting
train_718
In some recent work (Strube and Ponzetto, 2006), it has been shown that related pairs can be generated without pre-specifying the nature of the relation sought.
this work does not focus on differentiating among different relations, so that the generated relations might conflate a number of distinct ones.
contrasting
train_719
This allows the many parsers based on the Penn Treebank, for example, to be meaningfully compared.
there are two drawbacks to this approach.
contrasting
train_720
The most common form of parser evaluation is to apply the Parseval metrics to phrase-structure parsers based on the Penn Treebank, and the highest reported scores are now over 90% (Bod, 2003; Charniak and Johnson, 2005).
it is unclear whether these high scores accurately reflect the performance of parsers in applications.
contrasting
train_721
RASP uses an unlexicalised parsing model and has not been tuned to newspaper text.
it has had many years of development; thus it provides a strong baseline for this test set.
contrasting
train_722
For the purposes of discussion, we will suppose that X = R^F and that Y = {−1, +1}.
most of the techniques described in this section (as well as our own technique) are more general.
contrasting
train_723
We could equally well use Φ_s(x) = ⟨x, x⟩ and Φ_t(x) = ⟨x, 0⟩.
it turns out that it is easier to analyze the first case, so we will stick with that.
contrasting
train_724
Indeed, all the above methods do not make use of the unlabeled instances in the target domain.
our instance weighting framework allows unlabeled target instances to contribute to the model estimation.
contrasting
train_725
In Section 4 we will develop a general framework for semi-supervised learning with constraints.
it is useful to illustrate the ideas on concrete problems.
contrasting
train_726
This confirms results reported for the supervised learning case in (Punyakanok et al., 2005).
as shown, our proposed algorithm H&W&C for training with constraints is critical when the amount of labeled data is small.
contrasting
train_727
For a formally organized event, such as the annual MT Evaluation sponsored by National Institute of Standard and Technology (NIST MT Eval), it may be worthwhile to recruit multiple human translators to translate a few hundred sentences for evaluation references.
there are situations in which multiple human references are not practically available (e.g., the source may be of a large quantity, and no human translation exists).
contrasting
train_728
The results show that pseudo references are informative, as standard metrics were able to make use of the pseudo references and achieve higher correlations than judging from fluency alone.
higher correlations are achieved when learning with regression, suggesting that the trained metrics are better at interpreting comparisons against pseudo references.
contrasting
train_729
Moreover, we can gain from performing another step.
the inclusion of the English-Chinese dictionary is harmful in this case, probably because 1-to-n alignments are less frequent for this direction, and have been captured during the first step.
contrasting
train_730
This work was extended in (Rosti et al., 2007) by introducing system weights for word confidences.
the system weights did not influence the skeleton selection, so a hypothesis from a system with zero weight might have been chosen as the skeleton.
contrasting
train_731
Also, the METEOR score using the METEOR optimized weights is very high.
the other scores are worse in common with the tuning set results.
contrasting
train_732
They alone bring the performance in the MF up to 75%.
these two features explain only 56% of the cases in the VF.
contrasting
train_733
Second, as demonstrated in the Redwood Lingo Treebank, reversibility makes it easy to rapidly create very large evaluation suites: it suffices to parse a set of sentences and select from the parser output the correct semantics.
nLG geared realisers either work on evaluation sets of restricted size (500 input for SURGE, 210 for KPML) or require the time expensive implementation of a preprocessor transforming e.g., Penn Treebank trees into a format suitable for the realisers.
contrasting
train_734
, r_n) of the semantic requirement, and there is one universal quantifier for y and for each parameter x_j of the action except for ref(id(r)).
a distractor a for a referring expression introduced at u is removed when we substitute or adjoin an elementary tree into u which rules a out.
contrasting
train_735
One might consider using a metric based on language model probabilities for sentences: in evaluating a language model on (already existing) test data, a higher probability for a sentence (and lower perplexity over a whole test corpus) indicates better language modelling; perhaps a higher probability might indicate a better sentence.
here we are looking at generated sentences, which have been generated using their own language model, rather than human-authored sentences already existing in a test corpus; and so it is not obvious what language model would be an objective assessment of sentence naturalness.
contrasting
train_736
As in late fusion, modalityspecific classifiers are trained independently.
the Bayesian approach also learns to predict the reliability of each modality on a given instance, and incorporates this information into the Bayes net.
contrasting
train_737
The gesture features described thus far capture the similarity between static gestures; that is, gestures in which the hand position is nearly constant.
these features do not capture the similarity between gesture trajectories, which may also be used to communicate meaning.
contrasting
train_738
More specifically, assuming that the NM has a positive effect, the S users are asked to rate first the poorer version of the system (noNM) and then the better version (NM).
f users' task is easier as they already have a high reference point (NM) and it is easier for them to criticize the second problem (noNM).
contrasting
train_739
The most salient difference is that here we investigate the benefits of displaying the discourse structure information for the users.
(Rich and Sidner, 1998) never test the utility of the SIH.
contrasting
train_740
Studies have also shown that eye gaze has a potential to improve resolution of underspecified referring expressions in spoken dialog systems (Campana et al., 2001) and to disambiguate speech input (Tanaka, 1999).
to these earlier studies, our work focuses on a different goal of using eye gaze for automated vocabulary acquisition and interpretation.
contrasting
train_741
Wittenburg et al experiment with unrestricted speech input for electronic program guide search, and use a highlighting mechanism to provide feedback to the user regarding the "relevant" terms the system understood and used to make the query.
their usability study results show this complex output can be confusing to users and does not correspond to user expectations.
contrasting
train_742
The parser uses a representation for syntactic structure similar to dependency links which is well-suited for incremental parsing.
to previous unsupervised parsers, the parser does not use part-of-speech tags and both learning and parsing are local and fast, requiring no explicit clustering or global optimization.
contrasting
train_743
A shortcoming of the DP-based approaches is that they are unable to generate nonprojective structures.
non-projectivity is necessary to capture syntactic phenomena in many languages.
contrasting
train_744
The ordinary f-score is computed that way mostly in order to overcome the fact that sentences differ in length.
for applications such as IE and QA, which work at the single sentence level and which might reach erroneous decision due to an inaccurate parse, normalizing over sentence lengths is less of a factor.
contrasting
train_745
Rather, as indicated by the 74% accuracy, they also consider the reputation of the merchants.
the real value of the postings relies on the text and not on the numeric ratings: the accuracy is 87%-89% when using the textual reputation variables.
contrasting
train_746
The reason seems to be that the generation of a ranking by negativity seems a somehow harder task than the generation of a ranking by positivity; this is also shown by the results obtained with the uniform-valued vector e1, in which the application of PageRank improves with respect to e1 for positivity but deteriorates for negativity.
against the baseline constituted by the results obtained with the uniformvalued vector e1 for negativity, our rankings show a relevant improvement, ranging from −8.56% (e2) to −48.27% (e4).
contrasting
train_747
This excerpt expresses an overall negative opinion of the product being reviewed.
not all parts of the review are negative.
contrasting
train_748
The local dependencies between sentiment labels on sentences are similar to the work of Pang and Lee (2004), where soft local consistency constraints were created between every sentence in a document and inference was solved using a min-cut algorithm.
jointly modeling the document label and allowing for non-binary labels complicates min-cut style solutions as inference becomes intractable.
contrasting
train_749
That is, a system could start by classifying documents, use the document information to classify sentences, use the sentence information to classify documents, and repeat until convergence.
experiments showed that this did not improve accuracy over a single iteration and often hurt performance.
contrasting
train_750
Automatic sentiment classification has been extensively studied and applied in recent years.
sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical.
contrasting
train_751
We believe the most important reason for this is that they explore structured prediction problems, where labels of surrounding words from the source classifier may be very informative, even if the current label is not.
our simple binary prediction problem does not exhibit such behavior.
contrasting
train_752
The extraction of question-answer pairs amounted to a database of 1 million pairs in their experiment.
inspection of the publicly available Web-FAQ collection provided by Jijkoun and de Rijke 2 showed a great amount of noise in the retrieved FAQ pages and question-answer pairs, and yet the indexed question-answer pairs showed a serious recall problem in that no answer could be retrieved for many well-formed queries.
contrasting
train_753
In our experiments, we employed equation 3 to infer for each paraphrase pair translation model probabilities p_φ(syn|trg) and p_φ(trg|syn) from relative frequencies of phrases in bilingual tables.
to Bannard and Callison-Burch (2005), we applied the same inference step to infer also lexical translation probabilities p_w(syn|trg) and p_w(trg|syn) as defined in Koehn et al.
contrasting
train_754
In order to maximize recall for the comparative evaluation of systems, we selected 60 queries that were well-formed natural language questions without metacharacters and spelling errors.
for one third of these well-formed queries none of the five compared systems could retrieve an answer.
contrasting
train_755
This evaluation measure accounts for improvements in coverage, i.e., it rewards cases where answers are found for queries that did not have an adequate or material answer before.
the mean reciprocal rank (MRR) measure standardly used in QA can have the effect of preferring systems that find answers only for a small set of queries, but rank them higher. (Table 4: Examples for queries and expansion terms yielding improved (+), decreased (-), or unchanged (0) retrieval performance compared to retrieval without expansion.)
contrasting
train_756
Such is the case with Peter's short speech in the second half of Luke 9:33 (see Table 1).
unimportant details may be deleted, and new information weaved in from other sources or oral traditions.
contrasting
train_757
Second, we place no hard restrictions on the reordering of the source text, opting instead for a soft preference for maintaining the source order through the Order feature.
deviation from the source order is limited to "flips" between two sentences in (Barzilay and Elhadad, 2003), an assumption that is not valid in the Synoptics 6 .
contrasting
train_758
To a certain extent, these lexical networks enable topic segmenters to exploit a sort of concept reiteration.
their lack of any explicit topical structure makes this kind of knowledge difficult to use when lexical ambiguity is high.
contrasting
train_759
Other work suggests a clear utility for generating language manifesting personality (Reeves and Nass, 1996).
to date, (1) research in generation has not systematically exploited the psycholinguistic findings; and (2) there has been little evaluation showing that automatic generators can produce language with recognizable personality variation.
contrasting
train_760
So, we must first map the psychological findings to parameters of a natural language generator (NLG).
this presents several challenges: (1) The findings result from studies of genres of language, such as stream-of-consciousness essays (Pennebaker and King, 1999), and informal conversations (Mehl et al., 2006), and thus may not apply to fixed content domains used in NLG; (2) Most findings are based on self-reports of personality, but we want to affect observer's perceptions; (3) The findings consist of weak but significant correlations, so that individual parameters may not have a strong enough effect to produce recognizable variation within a single utterance; (4) There are many possible mappings of the findings to generation parameters; and (5) It is unclear whether only specific speech-act types manifest personality or whether all utterances do.
contrasting
train_761
As expected, the best segmentation results are obtained using manual transcripts.
the gap between audio-based segmentation and transcript-based segmentation narrows when the recognition accuracy decreases.
contrasting
train_762
Basically, the BiTAM model consists of topic-dependent translation lexicons modeling Pr(c|e, k), where c, e and k denote the source Chinese word, target English word and the topic index respectively.
the bLSA framework models Pr(c|k) and Pr(e|k), which is different from the BiTAM model.
contrasting
train_763
Another observation is that the CH→EN bLSA model seems to give better performance than the EN→CH bLSA model.
their differences are not significant.
contrasting
train_764
Recently, there has been a surge of interest in improving coreference resolution by jointly modeling coreference with a related task such as MD (e.g., Daumé and Marcu (2005)).
joint models typically need to be trained on data that is simultaneously annotated with information required by all of the underlying models.
contrasting
train_765
, p, where S_i is the i-th tree of text segments, and T_i is the table-of-contents for that tree.
we cannot directly use these tables-of-contents for training our global model: since this model selects one of the candidate titles z_1^i, .
contrasting
train_766
Admittedly, this method of generating training and testing data omits some dependencies at the level of the table-of-contents as a whole.
the subtrees used in our experiments still exhibit a sufficiently deep hierarchical structure, rich with contextual dependencies.
contrasting
train_767
If we do not cluster the words according to their part-of-speech, we also lose some performance, obtaining 78.6% at best.
clustering all words (such as CC, DT, IN part-of-speech tags) also gives weaker results (81.1% accuracy at best).
contrasting
train_768
Recent results have shown that symbolic noun compound interpretation systems using machine learning techniques coupled with a large lexical hierarchy perform with very good accuracy, but they are most of the time tailored to a specific domain (Rosario and Hearst, 2001).
the majority of corpus statistics approaches to noun compound interpretation collect statistics on the occurrence frequency of the noun constituents and use them in a probabilistic model (Lauer, 1995).
contrasting
train_769
(Kim and Baldwin, 2006) and (Turney, 2006) focus on the lexical similarity of unseen noun compounds with those found in training.
although the web-based solution might overcome the data sparseness problem, the current probabilistic models are limited by the lack of deep linguistic information.
contrasting
train_770
As formulated above, the learning task can be seen as an instance of multiple instance learning.
there are important properties that set it apart from problems previously considered in MIL.
contrasting
train_771
present a systematic investigation of the pattern representation models and point out that substructures of the linguistic representation and the access to the embedded structures are important for obtaining a good coverage of the pattern acquisition.
all considered representation models (subject-verbobject, chain model, linked chain model and subtree model) are verb-centered.
contrasting
train_772
With one randomly selected seed, we could finally extract most relevant events in some covered time interval.
it turns out that it is not just the average number of reports per events that matters but also the distribution of reportings to events.
contrasting
train_773
(2002) reported that discourse structure helps to extract anaphoric relations.
their set of grammatical rules is heuristic.
contrasting
train_774
If the system works without a NER component, it only knows that "Oracle" and "PeopleSoft" are proper noun phrases, and its confidence in correctness of a candidate relation instance Acquisition(Oracle, PeopleSoft) cannot be very high.
both entities occur many times elsewhere in the corpus, sometimes in strongly discriminating contexts, such as "Oracle is a company that…" or "PeopleSoft Inc." If the system somehow learned that such contexts indicate entities of the correct type for the Acquisition relation (i.e., companies), then the system would be able to boost its confidence in both entities ("Oracle" and "PeopleSoft") being of correct types and, consequently, in (Oracle, PeopleSoft) being a correct instance of the Acquisition relation.
contrasting
train_775
Therefore, in such case, the simplest recourse is to simply label the entity as Invalid, and not to try fixing the boundaries.
if a word was missed from an entity (e.g., "Beverly O" instead of "Beverly O'Neill"), the resulting sequence will be frequent.
contrasting
train_776
The Corpusbased validator, however, works purely on the basis of context, entirely disregarding the internal structure of entities, and thus performs worst of all in this case.
the Corpus-based validator is able to improve the results for the Inventor relation, which the other two validators are completely unable to do.
contrasting
train_777
Since the technical root comes before the sentence itself, no new non-projective edges are introduced.
edges from technical roots may introduce non-planarity.
contrasting
train_778
They confirm the findings of Kuhlmann and Nivre (2006): planarity seems to be almost as restrictive as projectivity; well-nestedness, on the other hand, covers large proportions of trees in all languages.
to global constraints, properties of individual non-projective edges allow us to pinpoint the causes of non-projectivity.
contrasting
train_779
When applying self-training to a parser trained with a small dataset we expect the coverage of the parser to increase, since the combined training set should contain items that the seed dataset does not.
since the accuracy of annotation of such a parser is poor (see the no self-training curve in Figure 1) the combined training set surely includes inaccurate labels that might harm parser performance.
contrasting
train_780
This is largely because HPSG is based on a lexicalized grammar formalism, and as such its syntactic structures have an underlying dependency backbone.
hPSG syntactic structures include long-distance dependencies, and the underlying dependency structure described by an HPSG structure is a directed acyclic graph, not a dependency tree (as used by mainstream approaches to data-driven dependency parsing).
contrasting
train_781
The dynamic SBNs which we propose, called Incremental Sigmoid Belief Networks (ISBNs), have large numbers of latent variables, which makes exact inference intractable.
they can be approximated sufficiently well to build fast and accurate statistical parsers which induce features during training.
contrasting
train_782
Sigmoid Belief Networks were used originally for character recognition tasks, but later a dynamic modification of this model was applied to the reinforcement learning task (Sallans, 2002).
their graphical model, approximation method, and learning method differ significantly from those of this paper.
contrasting
train_783
The assumption in this step is that transliteration of each vowel sequence of the source is a vowel sequence in the target language, and similarly for consonants.
consonants do not always map to consonants, or vowels to vowels (for example, the English letter "s" may be written as " " in Persian which consists of one vowel and one consonant).
contrasting
train_784
The process of aligning words explained above can handle words with already known components in the alignment set A (the frequency of occurrence is greater than zero).
when this is not the case, the system may repeatedly insert ε while part or all of the target characters are left intact (unsuccessful alignment).
contrasting
train_785
Thus in (NP the Manhattan phone book and tour guide) 6 a flat structure is given because although the is a non-nominal modifier, it is shared, modifying both tour guide and phone book, and all other modifiers in the phrase are nominal.
we found that out of 1,417 examples of NP coordination in sections 02 to 21, involving phrases containing only nouns (common nouns or a mixture of common and proper nouns) and the coordinating conjunction, as many as 21.3%, contrary to the guidelines, were given internal structure, instead of a flat annotation.
contrasting
train_786
For example, there were 3,314 cases in which sentence boundary detection needs to use the results of extra line break detection, extra punctuation mark detection, and case restoration.
in the cascaded method, sentence boundary detection is conducted after extra punctuation mark detection and before case restoration, and thus it cannot leverage the results of case restoration.
contrasting
train_787
In essence, such methods utilize extraction patterns to generate candidate extractions (e.g., "Istanbul") and then assess each candidate by computing co-occurrence statistics between the extraction and words or phrases indicative of class membership (e.g., "cities such as").
zipf's Law governs the distribution of extractions.
contrasting
train_788
Chiang (2005) shows significant improvement by keeping the strengths of phrases while incorporating syntax into statistical translation.
the performance of linguistically syntax-based models can be hindered by making use of only syntactic phrase pairs.
contrasting
train_789
These multiword phrasal units contribute to fluency by inherently capturing intra-phrase reordering.
despite this progress, interphrase reordering (especially long distance ones) still poses a great challenge to statistical machine translation (SMT).
contrasting
train_790
Other further generalizations of orientation include the global prediction model (Nagata et al., 2006) and distortion model (Al-Onaizan and Papineni, 2006).
these models are often fully lexicalized and sensitive to individual phrases.
contrasting
train_791
In our own exper-iment setting, the best distortion limit for Chinese-English translation is 4.
some ideal translations exhibit reorderings longer than such distortion limit.
contrasting
train_792
For example, for the last two Chinese phrases in figure 1(a), simply swapping the two children of the NP node will produce the correct word order on the English side.
there are also reorderings which do not agree with syntactic analysis.
contrasting
train_793
Consequently, P (t|w,θ) favors t = 1 for any sequence that does not contain exactly five heads, and assigns equal probability to t = 1 and t = 0 for any sequence that does contain exactly five heads -a counterintuitive result.
using some standard results in Bayesian analysis we can show that applying Equation 3 yields a result which is significantly less than .5 when n_H = 5, and only favors t = 1 for sequences where n_H ≥ 8 or n_H ≤ 2.
contrasting
train_794
Consequently, it makes sense to use a Dirichlet prior with β < 1.
as noted by Johnson et al.
contrasting
train_795
For POS tagging, estimates based on multiple samples might be useful if we were interested in, for example, the probability that two words have the same tag.
computing such probabilities across all pairs of words does not necessarily lead to a consistent clustering, and the result would be difficult to evaluate.
contrasting
train_796
(2003) improved the CRF method by employing the large margin method to separate the gold standard sequence labeling from incorrect labellings.
the complexity of quadratic programming for the large margin approach prevented it from being used in large scale NLP tasks.
contrasting
train_797
This simple example has shown the advantage of adopting a flexible search strategy.
it is still unclear how we maintain the hypotheses, how we keep candidates and accepted labels and spans, and how we employ dynamic programming.
contrasting
train_798
This can also explain why the performance of leftto-right search with non-aggressive learning is close to bidirectional search if the beam is large enough.
with beam width = 1, non-aggressive learning over left-to-right search performs much worse, because in this case it is more likely that the gold-standard tag is not in the beam.
contrasting
train_799
It is reported in (Toutanova et al., 2003) that a crude company name detector was used to generate features, and it gave rise to significant improvement in performance.
it is difficult for us to duplicate exactly the same feature for the purpose of comparison, although it is convenient to use features like that in our framework.
contrasting