id         stringlengths   7 – 12
sentence1  stringlengths   6 – 1.27k
sentence2  stringlengths   6 – 926
label      stringclasses   4 values
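Each row below follows this schema: an id, a sentence pair (sentence1, sentence2), and one of the four label classes (here, "contrasting"). As a minimal sketch of how such a split could be inspected (assuming the rows are stored locally as JSON Lines under a hypothetical file name, train.jsonl, which is not given anywhere in this listing), the following Python snippet counts the label classes and prints the first contrasting pair:

```python
import json
from collections import Counter

def load_rows(path):
    """Yield one {id, sentence1, sentence2, label} dict per JSON Lines row."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    # "train.jsonl" is a placeholder path, not part of the original listing.
    rows = list(load_rows("train.jsonl"))
    # Distribution over the 4 label classes declared in the schema above.
    print(Counter(row["label"] for row in rows))
    # First sentence pair carrying the "contrasting" label.
    first = next(row for row in rows if row["label"] == "contrasting")
    print(first["id"])
    print("sentence1:", first["sentence1"])
    print("sentence2:", first["sentence2"])
```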
train_19900
Besides this, the multi-layer architecture makes it possible to add/remove layers when needed, which makes the system more flexible.
standoff annotations make it possible to store the different annotations apart from the original text.
contrasting
train_19901
Besides this, the system is perfectly integrated in the TEITOK environment: it allows for complex queries at the token level using all the information stored in the corpus through CQP; it makes possible a visual representation of the learner text corrected at three different levels (orthographic, grammatical and lexical).
taking into account what we discussed in section 1, it is clear that this system presents some problems for error annotation: it only works at the token level 2 ; it offers a limited categorization and description of errors types; and it is limited to three linguistic areas, while some errors go beyond those areas.
contrasting
train_19902
For the moment, our results indicate that the token-based representation may account for most of the errors found.
these results may be biased by the fact that the annotator has tried to adjust the annotation to the token-based representation and we think that a deeper analysis is necessary to draw precise conclusions.
contrasting
train_19903
The SGATe (SLA in Grammatically Annotated Texts) resource comprises the entire EF-Cambridge Open Language Database (EFCAMDAT) annotated with 107 pedagogically relevant grammatical structures.
for diving deeper into the grammatical structures, we did not use the whole corpus, as we explain in this section.
contrasting
train_19904
We could have subdivided the North-Occitan subdomain (and the Croissant) into Limousin, Auvergnat and Vivaro-Alpine.
the more we multiply borders, the more we create problems: speakers from Velay, for example, are difficult to classify.
contrasting
train_19905
The applied protocol (a translation) can also have an influence, prompting us to take a critical look at the productions so elicited.
depending on the speakers, the translation can favour both calques and a search for maximum deviation from French.
contrasting
train_19906
Our analysis in Section 2 indicates that this could be achieved by issuing more search engine queries.
scannell (2007) argues that web crawling using seed URLs returned by the queries is important for building corpora for low-resource languages.
contrasting
train_19907
The results obtained by training on OLCA67 and then applying the model to OLCA68 (and the other way round) could indicate that the use of the same spelling system plays a positive role.
it should be mentioned that OLCA67 and OLCA68 also roughly correspond to the same vocabulary set, only for two different dialectal areas, and this could account for the results obtained.
contrasting
train_19908
Prior work on this task has relied on the use of MPs' division votes as sentiment polarity labels, under the assumption that these votes represent the speakers' opinions towards the subjects under discussion: votes for 'Aye' (that the motion be approved) or 'No' (that it be negated) are presumed to indicate positive and negative sentiment, respectively.
as MP voting is to a large extent constrained by party affiliations, with members often under pressure to follow the party whip regardless of their personal opinion (Searing, 1994;Norton, 1997), we perform sentiment analysis experiments on the Hansard Debates with Sentiment Tags (HanDeSeT) corpus, which features manually annotated sentiment labels in addition to those extracted from division votes .
contrasting
train_19909
For example, for motions that commend the Government, speeches which support the motion are likely to incorporate positive language, while those that oppose the motion will tend to include typically negative language.
for motions that oppose Government policy, speeches favourable to the motion are themselves also likely to use typically negative language towards the Government, and unfavourable speeches will conversely use positive language, as in Example 1.
contrasting
train_19910
They are wrong in principle and in this case.
we are realistic and we know that the Government have a majority.
contrasting
train_19911
2 In this step we perform sentence segmentation, tokenization, lemmatization, morphological analysis, part-of-speech tagging and dependency parsing, following the Universal Dependencies scheme (Nivre et al., 2016).
the pre-processing set-up is slightly complicated by the fact that the Norwegian language has two official written standards -Bokmål (the main variety) and Nynorsk -both of which are represented in the review corpus.
contrasting
train_19912
This has often been based on single-domain datasets, and examples include (for English unless otherwise noted) movie reviews collected from aggregator sites like IMDb.com (Pang and Lee, 2004; Maas et al., 2011) and RottenTomatoes.com (Pang and Lee, 2005; Socher et al., 2013), hotel reviews from TripAdvisor (Wang et al., 2010), book reviews (in Arabic) (Aly and Atiya, 2013), app reviews compiled from Apple App store and Google Play (Guzman and Maalej, 2014), and reviews of restaurants and other businesses in the Yelp open dataset (https://www.yelp.com/dataset).
the unbalanced nature of these datasets (single domains) can impose inherent limitations on the ability of models to generalize.
contrasting
train_19913
As we had only referenced whether a sentence is positive or negative, without taking into account the intensity with which it is expressed, we have not taken into consideration whether a sentence has multiple occurrences of the same polarity on different words, and have only considered whether it is positive or negative.
for the abstractive summaries we used Spanish Sentistrength (López et al., 2012) for the analysis.
contrasting
train_19914
This can be attributed to a high correlation of a classifier with words and phrases that are specific for the positive and negative utterances of the given domain.
language expresses some lexical means of conveying sentiment polarity in a way that is shared across different domains.
contrasting
train_19915
synsets that include both positive and negative LUs.
they comprise only 3.8% of all marked synsets, i.e.
contrasting
train_19916
between strong vs weak or weak vs neutral.
to SentiWordNet the manual annotation in plWordNet is done only on the level of LUs (Zaśko-Zielińska et al., 2015) and synsets are not manually assigned sentiment polarity values.
contrasting
train_19917
An observed precision and recall for positive and negative reviews is slightly different, especially when we compare a model using randomly generated lexicon (RAND) with the models using lexicons constructed in a controlled way (BASE, CPP-N).
the difference between rule-based propagation and CPP is small, which may suggest that hybrid methods combining neural approaches with language resources are still imperfect for this task. (Table 6: Precision (P), recall (R) and F-score (F) for specific polarity classes, in the task of sentence-level sentiment recognition with LR-LSTM.)
contrasting
train_19918
encoding the noun-verb distinction).
all these studies have been based on a very small number of sign languages, they only focused on one aspect of iconicity (the choice between the object vs. handling handshapes), and only in one semantic field (namely, instruments).
contrasting
train_19919
Another approach is represented by the ASL-Lex database (Caselli et al., 2016) which contains approximately 1000 ASL signs annotated (among other features) for iconicity ratings.
the ratings only reflect a degree of iconicity (on a 7-point scale) for the whole signs, and do not discuss iconic features.
contrasting
train_19920
(2013) who annotated more than 700 ASL signs for iconicity of the three major parameters: handshape, location, and movement.
these parameters were only annotated as being iconic or non-iconic, without further analysis of iconicity.
contrasting
train_19921
In addition, the procedure of data collection led to the fact that lexical signs and multi-sign descriptions of non-lexicalized concepts are not systematically distinguished.
since we focused only on the most basic concrete concepts, we consider the data to be good enough for our purposes.
contrasting
train_19922
Like the original Swadesh list, Woodward's list contains 100 items that are meant to identify basic/universal concepts which are supposed to reveal the degree to which pairs of SLs are related.
this method has not been systematically tested or applied to SLs.
contrasting
train_19923
Such a comparison produces results like those shown in Table 1. These approaches are based on pairwise comparisons of SLs and they show that the lexicostatistics method can be successfully applied to languages in the visual modality (but see Section 4 for commentary on some of the problems of this method of comparison).
previous research has not attempted a systematic comparison of a large sample of SLs.
contrasting
train_19924
Previous studies compared signs by looking at the global similarities of the four main classes of phonemes (Handshape, Location, Movement and Orientation).
none of them has been explicit on how similarity is measured.
contrasting
train_19925
In this study, we decided to consider all features and not to apply any weight correction.
we show the effect of collapsing some feature values for handshape and place of articulation.
contrasting
train_19926
One involves a picture sequence corresponding to the chronological sequence of the events depicted, about RMS Titanic.
to avoid productions too strictly focused on signing each picture in turn, we have included informational pictures (size of ship, number of life boats, etc.)
contrasting
train_19927
A major advantage of deep sequence generation models such as Recurrent Neural Networks (RNNs) is that they do not require the training data to be pre-segmented (Graves, 2013).
for eventual use in baseline networks or automatic sentence segmentation models, additional time annotations for separation of all lexical items in the corpus were determined.
contrasting
train_19928
We evaluated the corpus usability for the learning of JSL sentence structure with a straightforward modification of the sequence to sequence model (Seq2Seq) for English-French translation (Sutskever et al., 2014).
the determination of generated sequence quality is a difficult task that is commonly performed by rigging the generated sequences on a virtual character, and by subsequently assessing their naturalness and understandability in user studies.
contrasting
train_19929
Here, it should be noted that deep networks are commonly trained on much larger data sets.
since SL data collections cannot be acquired as easily as text or image data, the number of available training data can already be considered numerous for the given data content.
contrasting
train_19930
That particularly concerns Head and Neck Cancers (HNC), because their treatment can be mutilating and disabling.
the usual tools for assessing QoL are not relevant for measuring the impact of the treatment on the main functions involved by the sequelae.
contrasting
train_19931
One reason for this is that they combine experience with multiple sources of information.
there are some critical drawbacks of manual segmentation which make it impractical for large speech corpora.
contrasting
train_19932
The results indicate that the proportion of adverbs in the missing translations is 26.8% greater than the average proportion of all missing translations (33.1%).
the proportion of nouns is 7.9% less than the average, which suggests that nouns tend not to be omitted.
contrasting
train_19933
Thus, names and numbers do not trigger problems for interpreters.
adverbs, which play a modifying role in sentences, similar to adjectives, show a 27.9% greater proportion of missing translations than adjectives.
contrasting
train_19934
We know that for multi-speaker acted emotions, classification rates usually reach high performance (for example with corpora such as EMO-DB).
with multi-speaker spontaneous speech, the classification rates are much lower, thus reflecting the difficulty to discriminate emotions in such a context (Schuller et al., 2009b).
contrasting
train_19935
For example, understanding image descriptions is crucial for interpreting the requests quoted above, as all of them contain image descriptions (my wedding dress; my dog's eyes; the people in the background; my ex).
to our knowledge, no work has yet attempted to tackle the specific task of automated image editing through natural language.
contrasting
train_19936
cases that contain an event introduced through a nominal expression.
she excludes those grammatical elements that introduce relative clauses or pronouns (as who in "I don't know who you are.")
contrasting
train_19937
id="4" source = "wiki 39733 Papa Pio XII" text: .. id="5" source = "wiki 1041014 Divisione Nazionale" text: .. Level-of-detail:Arg2-as-detail example id="1" source = "Adige 413952" text: .. sidered other 357 documents of the newspaper "L'Adige" (same source of CIB); we will refer to this source as Adige.
we search for additional examples in documents from Wikipedia 8 .
contrasting
train_19938
A widely adopted approach involves adopting a threshold for the number of overlapping n-grams randomly sampled between each pair of documents (Broder, 1997).
the results depend on the accuracy and coverage of the sampling procedure in each of the documents, so that larger samples are more likely to produce more reliable indication of redundancy.
contrasting
train_19939
A sufficient number of the corpora in several languages, particularly in English, is freely available.
to the best of our knowledge, the Czech one is missing.
contrasting
train_19940
There have been attempts to compile corpora more representative of everyday language by utilizing different sources, especially movie subtitles (Lison and Tiedemann 2016).
the availability of such sources is limited.
contrasting
train_19941
In order to accomplish this, we select "typical sentences", defined as sentences with a common syntactic structure (represented as a sequence of POS-tags).
to simplified or controlled languages (like Simplified English (Ogden 1932) or Kontrolliertes Deutsch (Lehrndorfer 1996) (controlled German)), there is no set of handwritten rules for syntax and vocabulary.
contrasting
train_19942
The biomedical domain offers many linguistic resources for Natural Language Processing, including terminologies and corpora.
most of these resources are prominently available for English and the access to terminological resources in languages other than English may not be so simple.
contrasting
train_19943
The presentation of a new Xhosa lexicographical resource for a multilingual federated environment is an example for the transformation of isolated and unpublished dictionary data to the digital age.
the data set used to develop the BantuLM ontology is only a snapshot of a resource in development.
contrasting
train_19944
Discontinuous DMs are described as having two orthographic segments.
dMs are described as composed of a single token or as a multiword unit (phrasal).
contrasting
train_19945
Some schemas are linked to related frames defined in the FrameNet project (Baker et al., 1998;Ruppenhofer et al., 2006).
in practice, these links are often still missing.
contrasting
train_19946
The MetaNet repository readily provides a substantial number of such metaphors for a non-trivial number of schemas.
its lexical coverage is limited.
contrasting
train_19947
The paper reported a high level of agreement of LCM tags between English verbs and their Polish translations, as Kappa scores ranged from 0.83 to 0.87 depending on the person (the experiment involved two linguists).
translating verbs from the General Inquirer dictionary into Polish and copying their LCM labels was not a satisfactory method to obtain a complete LCM dictionary for Polish due to poor coverage.
contrasting
train_19948
There are shallow similarities between a WordNet and a thesaurus, in which different sets of terms are grouped based on a meaning-similarity criterion.
the labels of the Wordnet are defined by the semantic relationship between the words or entries, while the clusters of words in a synonym dictionary may not follow any distinctive pattern of explicit meaning similarity (Miller, 1995).
contrasting
train_19949
In the second example, the synsets do not match so the classification was labeled as incorrect.
in the synset result (spa-30-05598147n) the word "nose" shows up, which is a part of one's face so it's related to what we were originally looking for (chin).
contrasting
train_19950
In general, we expected to have good precision for all types of dependencies, since each candidate is matched against the collocations represented in the ontology and the ontology is based on the FLN, which is manually constructed.
we had false positives due to parsing errors.
contrasting
train_19951
Amongst those, the Persian language is spoken by more than 110 million speakers world-wide and has more than 570K articles on Wikipedia.
it has been rarely studied for NER (Khormuji and Bazrafkan, 2014) or even just text processing (Shamsfard, 2011).
contrasting
train_19952
Similarly, terms like "Kreis" ("county" ) in "[Kreis Tuttlingen]" are included to distinguish the county from the city.
terms like "Ecke" ("corner" ) or "Kreuzung" ("intersection" ) are not included in the extent of Location-Street entities, because they are not an integral part of the location's name.
contrasting
train_19953
a CompanyProvidesProduct relation from a news text like "Sensata Technologies' products include speed sensors, motor protectors, and magnetic-hydraulic circuit breakers" , where the product argument refers to a non-consumer product or product class entity such as "speed sensors" or "magnetic-hydraulic circuit breakers" .
when it comes to such specific domains, developing named entity recognition algorithms is severely hampered by the lack of publicly available training data and the difficulty of accessing existing dictionary-type resources, such as product catalogs.
contrasting
train_19954
An LG is applied to these files without any markup and the NEs identified by it are annotated.
the segmented files are tokenized using the OpenNLP 2 library.
contrasting
train_19955
As with the setting of supervised learning, building NER systems needs a massive amount of labeled training data which are often annotated by humans.
for most languages, large-scale labeled datasets are only readily available in some domains, for example the news domain.
contrasting
train_19956
Embeddings are especially helpful when there is little training data, since they can be trained on a large amount of unlabeled data.
training embeddings for Chinese is not straightforward: Chinese is not word segmented, so embeddings for each word cannot be trained on a raw corpus.
contrasting
train_19957
Such resources are truly valuable only if they are enriched with different layers of linguistic annotation ranging from morphology and syntax to semantics.
there are many researchers who (want to) use corpora in their everyday work and look for various occurrences of specific words, forms or patterns, syntactic functions, etc.
contrasting
train_19958
In case of Sandhi, several examples in the SandhiKosh corpus contain more than two words to be joined.
all the three tools have provision for joining only two words at a time.
contrasting
train_19959
In fact, two levels of numbering (i.e., section and subsection identifiers) cover all complex sentences in CLTT.
this strategy could be easily extended to other numbering levels.
contrasting
train_19960
It was later modified to produce also the enhanced UD graphs from the basic trees, consulting the original LVTB data as well.
this leads to some inaccuracies, and we plan to rewrite the transformation so that the enhanced graph is built first as it closer follows the original hybrid representation, and then it is reduced to the basic dependencies.
contrasting
train_19961
So far we have conducted only limited preliminary experiments on the AMR annotation based on the underlying UD, PropBank and FrameNet layers, as well as the auxiliary named entity and coreference layers.
it seems feasible to systematically generate draft AMR annotations for manual post-editing, thus, boosting the productivity and acquiring more consistent AMRs.
contrasting
train_19962
For example, the Chinese Penn Treebank has clear annotations about the locations of traces in relative clauses but the fillers are 'WHNP' empty categories.
the head noun of a relative clause can always be reliably and accurately located given the tree structure.
contrasting
train_19963
For example, the Stanford dependencies of (2) will contain 'rcmod(news, do-not-have)' and the Stanford dependencies of (3) will contain 'rcmod(happiness, need)'.
this dependency label does not provide any information about whether the head noun is the first or second argument of the verb.
contrasting
train_19964
By these principles, we mapped the rcmod dependencies into subject or object relative clause dependencies in our annotations.
it is more difficult to recover nonlocal dependencies for topic relative clauses.
contrasting
train_19965
Shamao is annotated as a verb in (10) in Treebank annotations and shamao de forms a relative clause modifying xiangsheng 'man'.
the gcg-l parser parses the word as a noun and the structure of the noun phrase is 'noun + de + noun', which is also a very common structure for noun phrases in Mandarin Chinese.
contrasting
train_19966
Ideally, this is a word that corresponds to the historical original both in meaning and etymology.
depending on the strategy employed by the respective module, this may also be a translation, if a corresponding word cannot be found or if its meaning has changed.
contrasting
train_19967
A possible explanation for this is that English word order is more rigid and shows a higher number of right-headed dependencies (including explicit subjects).
italian and Spanish, characterized by a more flexible word order, show a higher variability at the level of dependency direction.
contrasting
train_19968
It is accessible both online and offline.
this browser is compatible with the database created by the Polish editor, which means it is not independent of this editor.
contrasting
train_19969
The wordnets it resorts to have permissive licences for derivatives and redistribution and searching through the browser shows results in all their languages.
the source code of the browser itself is not available to be reused, and it is a browser that in any case offers no options to peruse wordnets on the basis of, direct or transitive, semantic relations.
contrasting
train_19970
By also allowing one to look for the translations of the lemma searched for in the wordnet of interest, the browser presented in Section 4 permits the perusing of any wordnet on which it is based, in a multilingual setting.
this still offers a quite limited compliance with a truly multilingual browser.
contrasting
train_19971
For example, the knowledge encoded in the WordNet relations part(elementary particle 1 n ,atom 1 n ), member(national 1 n ,country 3 n ) and substance(cartilage 1 n ,cartilaginous structure 3 n ) can also be inferred from SUMO.
according to our interpretation of the meronymy relations of WordNet, the knowledge in the relations part(cell 2 n ,cell nucleus 1 n ) and substance(grape 1 n ,wine 1 n ) is incompatible with SUMO.
contrasting
train_19972
According to our interpretation of the semantics of substance and the mapping information, we have to use the third QP and the SUMO predicate material r in order to translate the knowledge in substance(grape 1 n ,wine 1 n ) in terms of Adimen-SUMO.
fruitOrVegetable c is defined to be subclass of CorpuscularObject c in SUMO.
contrasting
train_19973
Primitives and Basic Concepts: Conventional sense representations have used semantic primitives to define and achieve canonical representations for concepts (Wierzbicka, 1972), such as Conceptual Dependency representation (Schank, 1975) and HowNet.
using primitives only to define concepts causes information degradation, as it is almost impossible to understand a definition of a complex concept merely with primitives.
contrasting
train_19974
If orange 橙色 plays the role of object such as in (5), the sense definition should be applied in the composition process.
in (6), orange 橙色 plays the role of modifier so operational expression should be applied.
contrasting
train_19975
For example, {老師|teacher} in E-HowNet is a subcategory of {專業人士|professional} and therefore a hyponym of {human|人}.
'teacher', also denotes a kind of occupation and should be regarded as an 'occupation value' as well.
contrasting
train_19976
Therefore, CLIR is a suitable application for such a translation model.
a major obstacle to this approach is the lack of parallel corpora for model training.
contrasting
train_19977
It is possible to use a Web crawler to explore the candidate sites completely.
we can take advantage of the search engines again to accelerate the process.
contrasting
train_19978
Comparing HTML structures seems to be a sound way to evaluate candidate pairs since parallel pairs usually have similar HTML structures.
we also noticed that parallel texts may have quite different HTML structures.
contrasting
train_19979
For example, AT&T, the long distance company, provides their users the following options: "Please say information for information on placing a call, credit for requesting credit, or operator to speak to an operator."
given the improved speech recognition technology, and the research done in natural language dialogue over the last decade, there exists tremendous potential in enhancing these customer service centers by allowing users to conduct a more natural human-like dialogue with an automated system to provide a customer-friendly system.
contrasting
train_19980
Due to the large size of the database, we did not attempt to clean the data.
we did build several data structures based on the database which were used by the system.
contrasting
train_19981
The parser is domain-driven in the sense that it uses domain-dependent information produced by the lexicon to look for information, in a user utterance, that is useful in the current domain.
it does not attempt to understand fully each user utterance.
contrasting
train_19982
Horiguchi outlined how "spoken language pragmatic information" can be translated (Horiguchi, 1997).
she did not apply this idea to a dialogue translation system.
contrasting
train_19983
1 In this paper, we use the term syntactic dependency (tree) structure as defined in the Meaning-Text Theory (MTT;Mel'cuk, 1988).
we extrapolate from this theory when we use the term conceptual dependency (tree) structure, which has no equivalent in MTT (and is unrelated to Schank's CD structures proposed in the 1970s).
contrasting
train_19984
The PSyntSs may not be valid directly for realization or transfer since they may contain unsupported features or dependency relations.
the PSyntSs are represented in a way to allow the framework to convert them into valid DSyntS via lexicostructural processing.
contrasting
train_19985
This is unusual in the field of text processing which has generally dealt with well-punctuated text: some of the most commonly used texts in NLP are machine readable versions of highly edited documents such as newspaper articles or novels.
there are many types of text which are not so-edited and the example which we concentrate on in this paper is the output from ASR systems.
contrasting
train_19986
Figure 1: Example text shown in standard and ASR format. Information which is not available in ASR output is sentence boundary information.
knowledge of sentence boundaries is required by many NLP technologies.
contrasting
train_19987
Both precision and recall are quite promising under these conditions.
this text is different from ASR text in one important way: the text is mixed case.
contrasting
train_19988
Reynar and Ratnaparkhi (1997) (Section 2) argued that a context of one word either side is sufficient for the punctuation disambiguation problem.
the results of our system suggest that this may be insufficient for the sentence boundary detection problem even assuming reliable part of speech tags (cf note 5).
contrasting
train_19989
This feature may make LSA a useful tool in the detection of a previous question that establishes a presupposed entity in a later question.
questionnaires differ from connected discourse, such as coherent stories, in aspects that make the present problem rather more difficult.
contrasting
train_19990
Figure 3: A RAGS view of the CGS system. In many NLG systems, (nominal) referring expression generation is an operation that is invoked at a relatively late stage, after the structure of individual sentences is fairly well specified (at least semantically).
referring expression generation needs to go right back to the original world model/knowledge base to select appropriate semantic content to realise a particular conceptual item as an NP (whereas all other content has been determined much earlier).
contrasting
train_19991
The identification and normalisation process described in the previous two sections are common to deceptive cognates, technical terms and numerical expressions altogether.
the comparison of the resulting normalised forms as well as the processing they should further undergo is of a rather case specific nature.
contrasting
train_19992
In addition, Table 5 shows that the English coreference results in better recall than Romanian coreference.
the recall shows a decrease for both languages for SNIZZLE because imprecise coreference links are deleted.
contrasting
train_19993
Predictive Annotation works better for Where, When, What, Which and How+adjective questions than for How+verb and Why questions, since the latter are typically not answered by phrases.
we observed that "by" + the present participle would usually indicate the description of a procedure, so we instantiate a METHODS QA-Token for such occurrences.
contrasting
train_19994
Our proposal for three levels of IE is modelled after the MUC standards using MUC-style representation.
we have modified the MUC IE task definitions in order to make them more useful and more practical.
contrasting
train_19995
In TREC-8 QA, this is not a problem since every question is guaranteed to have at least one answer in the given document pool.
in the real world scenario such as a QA portal, it is conceived that the IE results based on the processing of the documents should be complemented by other knowledge sources such as e-copy of yellow pages or other manually maintained and updated data bases.
contrasting
train_19996
If we were dealing with data that contains case information, we would also include fields representing the existence/non-existence of initial upper case for the five words.
since our current data does not include case information we do not include these features.
contrasting
train_19997
With the baseline case we achieve 70.4% precision but with 0% recall.
the decision tree approach obtains 77.1% precision and 73.8% recall.
contrasting
train_19998
If we take the baseline approach and assume that all unknown words are names, then we would achieve a precision of 70.4%.
using the decision tree approach, we obtain 86.5% precision and 92.9% recall.
contrasting
train_19999
For example, the edit distance feature in the misspelling identifier assumes that words consist of alphabetic characters which have undergone substitution/addition/deletion.
this feature will be less useful in a language such as Japanese or Chinese which use ideographic characters.
contrasting