{
"paper_id": "C02-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:19:31.936313Z"
},
"title": "Chinese Named Entity Identification Using Class-based Language Model 1",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Beijing University of Posts & Telecommunications",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Changning",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "C02-1012",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "1 This work was done while the author was visiting Microsoft Research Asia",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We consider here the problem of Chinese named entity (NE) identification using statistical language model(LM). In this research, word segmentation and NE identification have been integrated into a unified framework that consists of several class-based language models. We also adopt a hierarchical structure for one of the LMs so that the nested entities in organization names can be identified. The evaluation on a large test set shows consistent improvements. Our experiments further demonstrate the improvement after seamlessly integrating with linguistic heuristic information, cache-based model and NE abbreviation identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$EVWUDFW",
"sec_num": null
},
{
"text": "1(LGHQWLILFDWLRQ is the key technique in many applications such as information extraction, question answering, machine translation and so on. English NE identification has achieved a great success. However, for Chinese, NE identification is very different. There is no space to mark the word boundary and no standard definition of words in Chinese. The Chinese NE identification and word segmentation are interactional in nature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ",QWURGXFWLRQ",
"sec_num": null
},
{
"text": "This paper presents a unified approach that integrates these two steps together using a class-based LM, and apply Viterbi search to select the global optimal solution. The class-based LM consists of two sub-models, namely the context model and the entity model. The context model estimates the probability of generating a NE given a certain context, and the entity model estimates the probability of a sequence of Chinese characters given a certain kind of NE. In this study, we are interested in three kinds of Chinese NE that are most commonly used, namely person name (PER), location name (LOC) and organization name (ORG) . We have also adopted a variety of approaches to improving the LM. In addition, a hierarchical structure for organization LM is employed so that the nested PER, LOC in ORG can be identified.",
"cite_spans": [
{
"start": 620,
"end": 625,
"text": "(ORG)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": ",QWURGXFWLRQ",
"sec_num": null
},
{
"text": "The evaluation is conducted on a large test set in which NEs have been manually tagged. The experiment result shows consistent improvements over existing methods. Our experiments further demonstrate the improvement after integrating with linguistic heuristic information, cache-based model and NE abbreviation identification. The precision of PER, LOC, ORG on the test set is 79.86%, 80.88%, 76.63%, respectively; and the recall is 87.29%, 82.46%, 56.54%, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ",QWURGXFWLRQ",
"sec_num": null
},
{
"text": "Recently, research on English NE identification has been focused on the machine-learning approaches, including hidden Markov model (HMM), maximum entropy model, decision tree and transformation-based learning, etc. (Bikel et al, 1997; Borthwick et al, 1999; Sekine et al, 1998) . Some systems have been applied to real application.",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Bikel et al, 1997;",
"ref_id": "BIBREF1"
},
{
"start": 235,
"end": 257,
"text": "Borthwick et al, 1999;",
"ref_id": "BIBREF0"
},
{
"start": 258,
"end": 277,
"text": "Sekine et al, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5HODWHG:RUN",
"sec_num": null
},
{
"text": "Research on Chinese NE identification is, however, still at its early stage. Some researches apply methods of English NE identification to Chinese. Yu et al (1997) applied the HMM approach where the NE identification is formulated as a tagging problem using Viterbi algorithm. In general, current approaches to NE identification (e.g. Chen, 1997) usually contain two separate steps: word segmentation and NE identification. The word segmentation error will definitely lead to errors in the NE identification results. Zhang (2001) put forward class-based LM for Chinese NE identification. We further develop this idea with some new features, which leads to a new framework. In this framework, we integrate Chinese word segmentation and NE identification into a unified framework using a class-based language model (LM).",
"cite_spans": [
{
"start": 148,
"end": 163,
"text": "Yu et al (1997)",
"ref_id": "BIBREF8"
},
{
"start": 335,
"end": 346,
"text": "Chen, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 517,
"end": 529,
"text": "Zhang (2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "5HODWHG:RUN",
"sec_num": null
},
{
"text": "The n-gram LM is a stochastic model which predicts the next word given the previous n-1 words by estimating the conditional probability P(w n |w 1 \u2026w n-1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
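{
"text": "As an illustration (ours, not from the paper), such trigram probabilities can be estimated by maximum likelihood from n-gram counts; corpus and the padding symbols below are hypothetical placeholders:\n\nfrom collections import defaultdict\n\ndef train_trigram(corpus):\n    # corpus: a list of token lists, e.g. [['w1', 'w2', ...], ...]\n    tri, bi = defaultdict(int), defaultdict(int)\n    for sent in corpus:\n        toks = ['<BOS>', '<BOS>'] + sent + ['<EOS>']\n        for i in range(2, len(toks)):\n            tri[(toks[i-2], toks[i-1], toks[i])] += 1\n            bi[(toks[i-2], toks[i-1])] += 1\n    def p(w, w2, w1):\n        # MLE estimate of P(w | w2 w1); a real system would also smooth this\n        return tri[(w2, w1, w)] / bi[(w2, w1)] if bi[(w2, w1)] else 0.0\n    return p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class-based LM for NE Identification",
"sec_num": null
},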
{
"text": "In practice, trigram approximation P(w i |w i-2 w i-1 ) is widely used, assuming that the word w i depends only on two preceding words w i-2 and w i-1 . Brown et al (1992) put forward and discussed n-gram models based on classes of words. In this section, we will describe how to use class-based trigram model for NE identification. Each kind of NE (including PER, LOC and ORG) is defined as a class in the model. In addition, we differentiate the transliterated person name (FN) from the Chinese person name since they have different constitution patterns. The four classes of NE used in our model are shown in Table 1 Given a Chinese character sequence 6=V \u00abV Q , the task of Chinese NE identification is to find the optimal class sequence &=F \u00abF P (P<=Q) that maximizes the probability 3&_6. It can be expressed in the equation 1and we call it class-based model.",
"cite_spans": [
{
"start": 153,
"end": 171,
"text": "Brown et al (1992)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 612,
"end": 619,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "The class-based model consists of two sub-models: the context model 3& and the entity model P (S|C). The context model indicates the probability of generating a NE class given a (previous) context. P(C) is a priori probability, which is computed according to Equation 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "\u00d5 = - - @ P L L L L F F F 3 & 3 1 1 2 ) | ( ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "(2) P(C) can be estimated using a NE labeled corpus. The entity model can be parameterized by Equation 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "\u00d5 = - - - - @ @ = P M M HQG F VWDUW F P Q VWDUW F HQG F P Q F V V 3 F F V V V V 3 F F V V 3 & 6 3 M M P 1 1 1 1 1 ) | ] ... ([ ) ... | ] ... ]...[ ... ([ ) ... | ... ( ) | ( 1 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "The entity model estimates the generative probability of the Chinese character sequence in square bracket pair (i.e. starting from F M VWDUW to F M HQG) given the specific NE class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
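{
"text": "Putting the two sub-models together, the score of a candidate class sequence follows Equations (1)-(3). A minimal sketch (ours), where context_logp and entity_logp are hypothetical callables wrapping the two trained sub-models:\n\ndef class_sequence_logprob(spans, context_logp, entity_logp):\n    # spans: a list of (chars, c) pairs, one per class token c,\n    # where chars is the character span that the token covers\n    classes = [c for _, c in spans]\n    score = context_logp(classes)       # log P(C), Equation (2)\n    for chars, c in spans:\n        score += entity_logp(chars, c)  # log P([s...] | c), Equation (3)\n    return score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class-based LM for NE Identification",
"sec_num": null
},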
{
"text": "For different class, we define the different entity model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "For the class of PER (including PN and FN), the entity model is a FKDUDFWHUEDVHG trigram model as shown in Equation 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "\u00d5 - - = - - - - = = = HQG F VWDUW F N M N N N M HQG F VWDUW F M M M M 3(5 F V V V 3 3(5 F V V 3 ) , , | ( ) | ] ... ([ 1 2 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "where s can be any characters occurred in a person name. For example, the generative probability of character sequence \"\u00fb (Li Dapeng) is much larger than that of \u00eeH (many years) given the PER since \" is a commonly used family name, and \u00fb and are commonly used first names. The probabilities can be estimated with the person name list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
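{
"text": "A minimal sketch (ours) of scoring a PER candidate with the character-based trigram entity model of Equation (4); char_trigram_prob is a hypothetical function wrapping probabilities estimated from the person name list:\n\nimport math\n\ndef per_entity_logprob(chars, char_trigram_prob):\n    # chars: the characters of a PER candidate, e.g. list('李大鹏')\n    # char_trigram_prob(c, c2, c1): P(c | c2, c1, class=PER)\n    padded = ['<B>', '<B>'] + list(chars)\n    logp = 0.0\n    for k in range(2, len(padded)):\n        p = char_trigram_prob(padded[k], padded[k-2], padded[k-1])\n        logp += math.log(p if p > 0 else 1e-12)  # floor to avoid log(0)\n    return logp",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class-based LM for NE Identification",
"sec_num": null
},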
{
"text": "For the class of LOC, the entity model is a ZRUGEDVHG trigram model as shown in Equation (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": ") | ( max arg * 6 & 3 & & = ) | ( ) ( max arg & 6 3 & 3 &= (1) ) | ] ... ([ /2& F V V 3 M HQG F VWDUW F M M = - - @ /2& F Z Z _ Z 3 > PD[ /2& F _ Z Z 3 PD[ O N M N N N : M O : \u00d5 = - - = = = \u00bb (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "where W = w 1 \u2026w l is possible segmentation result of character sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "HQG F VWDUW F M M V V - - ... .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "For the class of ORG, the construction is much more complicated because an ORG often contain PER and/or LOC. For example, the ORG \" \u00d1 \u00d1 \u00fe N ( \u00cc \" (Air China Corporation) contains the LOC \"\u00d1\" (China). It is beneficial to such applications as question answering, information extraction and so on if nested NE can be identified as well . In order to identify the nested PER, LOC in ORG 2 , we adopted class-based LMs for ORG further, in which there are three sub models, one is the class generative model, and the others are entity model: person name model and location name model in ORG. Therefore, the entity model of ORG is shown in Equation 6which is almost same as Equation (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": ") | ] ... ([ 25* F V V 3 M HQG F VWDUW F M M = - - \u00fa \u00fa \u00fa \u00fa \u00fa \u00fb \u00f9 \u00ea \u00ea \u00ea \u00ea \u00ea \u00eb \u00e9 = = @ \u00fa \u00fa \u00fb \u00f9 \u00ea \u00ea \u00eb \u00e9 = = = = @ \u00d5 \u00d5 = - - = - - - - - - N L M L HQG F VWDUW F N L M L L L & M N HQG F VWDUW F M N & M HQG F VWDUW F M & 25* F F V V 3 25* F F F _ 3F 25* F F F V V 3 25* F F F 3 25* F & V V 3 F & 3 L L M M M M 1 ' ' ' 1 1 ' ' ) , ' | ] ... ([ max ) , ' ... ' | ] ... ([ ) | ' ... ' ( max )] , ' | ] ... ([ ) | ' ( [ max (6) where ' ... ' 1 ' N F F & =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "is the sequence of class corresponding to the Chinese character sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "addition, if M F is a normal word, 1 ) | ] ... ([ = - - M HQG F VWDUW F F V V 3 M M .",
"eq_num": "(7)"
}
],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "Based on the context model and entity models, we can compute the probability 3&_6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "2 For simplification, only nested person, location names are identified in organization. The nested person in location is not identified because of low frequency and can get the optimal class sequence The Chinese PER and transliterated PER share the same context class model when computing the probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&ODVVEDVHG/0 IRU1( ,GHQWLILFDWLRQ",
"sec_num": null
},
{
"text": "As discussed in 3.1.1, there are two kinds of probabilities to be estimated: P(C) and P(S|C) . Both probabilities are estimated using Maximum Likelihood Estimation (MLE) with the annotated training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0RGHOV(VWLPDWLRQ",
"sec_num": null
},
{
"text": "The parser NLPWin 3 was used to tag the training corpus. As a result, the corpus was annotated with NE marks. Four lists were extracted from the annotated corpus and each list corresponds one NE class. The context model 3& was trained with the annotated corpus and the four entity models were trained with corresponding NE lists. The Figure 1 shows the training process. (Begin of sentence (BOS) and end of sentence (EOS) is added) Given a sequence of Chinese characters, the decoding process consists of the following three steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "0RGHOV(VWLPDWLRQ",
"sec_num": null
},
{
"text": "6WHS All possible word segmentations are generated using a Chinese lexicon containing 120,050 entries. The lexicon is only used for segmentation and there is no NE tag in it even if one word is PER, LOC or ORG. For example, \u00ebP (Beijing) is not tagged as LOC. 6WHS NE candidates are generated from any one or more segmented character strings and the corresponding generative prob ability for each candidate is computed using entity models described in Equation (4)-(7). 6WHS Viterbi search is used to select hypothesis with the highest probability as the best output. Furthermore, in order to identify nested named entities, two -pass Viterbi search is adopted. The inner Viterbi search is corresponding to Equation (6) and the outer one corresponding to Equation (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0RGHOV(VWLPDWLRQ",
"sec_num": null
},
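{
"text": "A much-simplified sketch (ours, not the paper's implementation) of the lattice decoding in Steps 1-3; for brevity each edge is scored with a local log-probability rather than the paper's trigram context model, and candidates is a hypothetical callback produced by Steps 1-2:\n\ndef viterbi_decode(sentence, candidates, context_logp, entity_logp):\n    # candidates(i): yields (j, label) meaning sentence[i:j] is a lexicon\n    # word (label is None) or an NE candidate of class label\n    n = len(sentence)\n    best = [float('-inf')] * (n + 1)\n    back = [None] * (n + 1)\n    best[0] = 0.0\n    for i in range(n):\n        if best[i] == float('-inf'):\n            continue\n        for j, label in candidates(i):\n            score = best[i] + context_logp(sentence[i:j], label)\n            if label is not None:\n                score += entity_logp(sentence[i:j], label)  # Equations (4)-(7)\n            if score > best[j]:\n                best[j], back[j] = score, (i, label)\n    # follow back-pointers; assumes every position is reachable because\n    # single characters are always segmentation candidates\n    out, j = [], n\n    while j > 0:\n        i, label = back[j]\n        out.append((sentence[i:j], label))\n        j = i\n    return list(reversed(out))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Estimation",
"sec_num": null
},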
{
"text": "After the two-pass searches, the word segmentation and the named entities (including nested ones) can be obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "0RGHOV(VWLPDWLRQ",
"sec_num": null
},
{
"text": "There are some problems with the framework of NE identification using the class-based LM. First, redundant candidates NEs are generated in the decoding process, which results in very large search space. The second problem is that data sparseness will seriously influence the performance. Finally, the abbreviation of NEs cannot be handled effectively. In the following three subsections, we provide solutions to the three problems mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ",PSURYHPHQW",
"sec_num": null
},
{
"text": "In order to overcome the redundant candidate generation problem, the heuristic information is introduced into the class-based LM. The following resources were used: (1) Chinese family name list, containing 373 entries (e.g. \u00f4 (Zhang), _ (Wang)); (2) transliterated name character list, containing 618 characters (e.g.( shi), 5 (dun)); and (3) ORG keyword list, containing 1,355 entries (e.g. \u00fb: (university), (\u00cc(corporation)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+HXULVWLF,QIRUPDWLRQ",
"sec_num": null
},
{
"text": "The heuristic information is used to constrain the generation of NE candidates. For PER (PN), only PER candidates beginning with the family name is considered. For PER (FN), a candidate is generated only if all its composing character belongs to the transliterated name character list. For ORG, a candidate is excluded if it does not contain one ORG keyword.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+HXULVWLF,QIRUPDWLRQ",
"sec_num": null
},
{
"text": "Here, we do not utilize the LOC keyword to generate LOC candidate because of the fact that many LOC do not end with keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+HXULVWLF,QIRUPDWLRQ",
"sec_num": null
},
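{
"text": "A sketch (ours) of how these lists can prune candidate generation; the three list arguments are assumed to be loaded into Python sets:\n\ndef keep_candidate(cand, label, family_names, translit_chars, org_keywords):\n    # cand: candidate string; label: 'PN', 'FN', 'LOC' or 'ORG'\n    if label == 'PN':   # Chinese person name: must begin with a family name\n        return cand[0] in family_names\n    if label == 'FN':   # transliterated name: every character must be listed\n        return all(ch in translit_chars for ch in cand)\n    if label == 'ORG':  # organization: must contain an ORG keyword\n        return any(kw in cand for kw in org_keywords)\n    return True         # LOC: no keyword constraint is applied",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristic Information",
"sec_num": null
},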
{
"text": "The cache entity model can address the data sparseness problem by adjusting the parameters continually as NE identification proceeds. The basic idea is to accumulate Chinese character or word n-gram so far appeared in the document and use them to create a local dynamic entity model such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&DFKH0RGHO",
"sec_num": null
},
{
"text": ") | ( 1 - L L ELFDFKH Z Z 3 and ) ( L XQLFDFKH Z 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&DFKH0RGHO",
"sec_num": null
},
{
"text": ". We can interpolate the cache entity model with the static entity LM ) ..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&DFKH0RGHO",
"sec_num": null
},
{
"text": ". | ( 1 2 1 - - L L L VWDWLF Z Z Z Z 3 : Z Z Z _ Z 3 L L L FDFKH - - (8) ) .... | ( ) 1 ( ) | ( ) ( 1 1 2 1 1 2 1 - - - - + + = L L VWDWLF L L ELFDFKH L XQLFDFKH Z Z Z 3 Z Z 3 Z 3 O O O O where ] 1 , 0 [ , 2 1 \u00ce O O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&DFKH0RGHO",
"sec_num": null
},
{
"text": "are interpolation weight that is determined on the held-out data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&DFKH0RGHO",
"sec_num": null
},
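{
"text": "A minimal sketch (ours) of the interpolation in Equation (8), with the static model simplified to a bigram for brevity; the cache counters are assumed to be updated as identification proceeds through the document:\n\ndef cache_prob(w, w_prev, uni_cache, bi_cache, static_prob, lam1, lam2):\n    # uni_cache: {word: count}; bi_cache: {prev_word: {word: count}}\n    total = sum(uni_cache.values()) or 1\n    p_uni = uni_cache.get(w, 0) / total\n    ctx = bi_cache.get(w_prev, {})\n    p_bi = ctx.get(w, 0) / (sum(ctx.values()) or 1)\n    # Equation (8): interpolate the two cache estimates with the static LM\n    return lam1 * p_uni + lam2 * p_bi + (1 - lam1 - lam2) * static_prob(w, w_prev)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cache Model",
"sec_num": null
},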
{
"text": "We found that many errors result from the occurrence of abbreviation of person, location, and organization. Therefore, different strategies are adopted to deal with abbreviations for different kinds of NEs. For PER, if Chinese surname is followed by the title, then this surname is tagged as PER. For example, \u00ba\u00f55 (President Zuo) is tagged as <PER>\u00ba</PER> \u00f55. For LOC, if at least two location abbreviations occur consecutive, the individual location abbreviation is tagged as LOC. For example, \u00b9 / \u00cf (Sino-Japan relation) is tagged as <LOC> </LOC><LOC>\u00b9</LOC> /\u00cf. For ORG, if organization abbreviation is followed by LOC, which is again followed by organization keyword, the three units are tagged as one ORG. For example, -\u00eb P \u00d6\u00a8(Chinese Communist Party Committee of Beijing) i s tagged as <ORG>-<LOC>\u00ebP</LOC> \u00d6 </ORG>. At present, we collected 112 organization abbreviations and 18 location abbreviations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'HDOLQJZLWK$EEUHYLDWLRQ",
"sec_num": null
},
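{
"text": "The three strategies can be expressed as simple pattern checks over the first-pass output; the sketch below is our own paraphrase, and the list arguments (surnames, titles, abbreviation and keyword lists) are hypothetical stand-ins for the collected resources:\n\ndef tag_abbreviations(tokens, surnames, titles, loc_abbrs, org_abbrs, org_keywords):\n    # tokens: list of (string, label) pairs from the first decoding pass\n    out, i = [], 0\n    while i < len(tokens):\n        w, lab = tokens[i]\n        nxt = tokens[i + 1][0] if i + 1 < len(tokens) else ''\n        prv = tokens[i - 1][0] if i > 0 else ''\n        if w in surnames and nxt in titles:\n            out.append((w, 'PER'))  # surname + title -> surname is PER\n        elif w in loc_abbrs and (nxt in loc_abbrs or prv in loc_abbrs):\n            out.append((w, 'LOC'))  # consecutive location abbreviations\n        elif (w in org_abbrs and i + 2 < len(tokens)\n              and tokens[i + 1][1] == 'LOC'\n              and tokens[i + 2][0] in org_keywords):\n            # org abbreviation + LOC + org keyword -> one ORG\n            out.append((w + tokens[i + 1][0] + tokens[i + 2][0], 'ORG'))\n            i += 3\n            continue\n        else:\n            out.append((w, lab))\n        i += 1\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dealing with Abbreviation",
"sec_num": null
},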
{
"text": "We conduct evaluations in terms of precision (P) and recall (R). We also used the F-measure, which is defined as a weighted combination of precision and recall as Equation 11:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "([SHULPHQWV (YDOXDWLRQ0HWULF",
"sec_num": null
},
{
"text": "5 3 5 3 ) ++ = E E (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "([SHULPHQWV (YDOXDWLRQ0HWULF",
"sec_num": null
},
{
"text": "where E is the relative weight of precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "([SHULPHQWV (YDOXDWLRQ0HWULF",
"sec_num": null
},
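{
"text": "For concreteness, Equation (11) in code (our sketch); with beta = 1 this reduces to the familiar balanced F1:\n\ndef f_measure(p, r, beta=1.0):\n    # weighted combination of precision and recall, Equation (11)\n    if p == 0 and r == 0:\n        return 0.0\n    b2 = beta * beta\n    return (b2 + 1) * p * r / (b2 * p + r)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments: Evaluation Metric",
"sec_num": null
},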
{
"text": "There are two differences between MET evaluation and ours. First, we include nested NE in our evaluation whereas MET does not. Second, in our evaluation, only NEs with correct boundary and type label are considered the correct identifications. In MET, the evaluation is somewhat flexible. For example, a NE may be identified partially correctly if the label is correct but the boundary is wrongly detected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "([SHULPHQWV (YDOXDWLRQ0HWULF",
"sec_num": null
},
{
"text": "The training text corpus contains data from People's Daily (Jan.-Jun.1998). It contains 357,544 sentences (about 9,200,000 Chinese characters). This corpus includes 104,487 Chinese PER, 51,708 transliterated PER, 218,904 LOC, and 87,391 ORG. These data was obtained after this corpus was parsed with NLPWin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'DWD6HWV",
"sec_num": null
},
{
"text": "We built the wide coverage test data according to the guidelines 4 that are just same as those of 1999 IEER. The test set (as shown in Table 2 ) contains half a million Chinese characters; it is a balanced test set covering 11 domains. The test set contains 11,844 sentences, 49.84% of the sentences contain at least one NE. The number of characters in NE accounts for 8.448% in all Chinese characters.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "'DWD6HWV",
"sec_num": null
},
{
"text": "We can see that the test data is much larger than the MET test data and IEER data ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'DWD6HWV",
"sec_num": null
},
{
"text": "The training data produced by NLPWin has some noise due to two reasons. First, the NE guideline used by NLPWin is different from the one we used. For example, in NLPWin, \u00eb P\u00d6(Beijing City) is tagged as <LOC>\u00ebP </LOC> \u00d6, whereas \u00ebP\u00d6 should be LOC in our definition. Second, there are some errors in NLPWin results. We utilized 18 rules to correct the frequent errors. The following shows some examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "7UDLQLQJ'DWD3UHSDUDWLRQ",
"sec_num": null
},
{
"text": "The Table 4 shows the quality of our training corpus. ",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 11,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "7UDLQLQJ'DWD3UHSDUDWLRQ",
"sec_num": null
},
{
"text": "We conduct incrementally the following f our experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "([SHULPHQWV",
"sec_num": null
},
{
"text": "(1) Class-based LM, we view the results as baseline performance; (2) Integrating heuristic information into (1);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "([SHULPHQWV",
"sec_num": null
},
{
"text": "(3) Integrating Cache-based LM with (2); (4) Integrating NE abbreviation processing with (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "([SHULPHQWV",
"sec_num": null
},
{
"text": "Class-based LM (Baseline)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},
{
"text": "Based on the basic class-based models estimated with the training data, we can get the baseline performance, as is shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "([SHULPHQWV",
"sec_num": null
},
{
"text": "Comparing Table 4 and Table 5 , we found that the performance of baseline is better than the quality of training data. ,QWHJUDWLQJ+HXULVWLF,QIRUPDWLRQ",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 22,
"end": 29,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "([SHULPHQWV",
"sec_num": null
},
{
"text": "In this part, we want to see the effects of using heuristic information. The results are shown in Table 6 . In experiments, we found that by integrating the heuristic information, we not only achieved more efficient decoding, but also obtained higher NE identification precision. For example, the precision of PER increases from 65.70% to 77.63%, and precision of ORG increases from 56.55% to 81.23%. The reason is that adopting heuristic information reduces the noise influence. However, we noticed that the recall of PER and LOC decreased a bit. There are two reasons. First, organization names without organization ending keywords were not marked as ORG. Second, Chinese names without surnames were also missed. ,QWHJUDWLQJ&DFKHEDVHG/0 Table 7 shows the evaluation results after cache-based LM was integrated. From Table 6 and Table 7 , we found that almost all the precision and recall of PER, LOC, ORG have obtained slight improvements. ,QWHJUDWLQJ ZLWK 1( $EEUHYLDWLRQ",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 739,
"end": 746,
"text": "Table 7",
"ref_id": "TABREF7"
},
{
"start": 818,
"end": 825,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 830,
"end": 837,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "([SHULPHQWV",
"sec_num": null
},
{
"text": "In this experiment, we integrated with NE abbreviation processing. As shown in Table 8 , the experiment result indicates that the recall of PER, LOC, ORG increased from 82.06%, 81.27%, 36.65% to 87.29%, 82.46%, 56.54%, respectively. ",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "3URFHVVLQJ",
"sec_num": null
},
{
"text": "From above data, we observed that (1) the class based SLM performs better than the training data automatically produced with the parser; (2) the distinct improvements is achieved by using heuristic information;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6XPPDU\\",
"sec_num": null
},
{
"text": "(3) Furthermore, our method of dealing with abbreviation increases the recall of NEs. In addition, the cache-based LM increases the performance not so much. The reason is as follows: The cache-based LM is based on the hypothesis that a word used in the recent past is much likely either to be used soon than its overall frequency in the language or a 3 -gram model would suggest (Kuhn, 1990) . However, we found that the same NE often vari es its morpheme in the same document. For example, the same NE -\u00eb P \u00d6\u00a8(Chinese Communist Party Committee of Beijing),\u00ebP \u00d6\u00a8(Committee of Beijing City), \u00d6( Committee) occur in order.",
"cite_spans": [
{
"start": 379,
"end": 391,
"text": "(Kuhn, 1990)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "6XPPDU\\",
"sec_num": null
},
{
"text": "Furthermore, we notice that the segmentation dictionary has an important impact on the performance of NE identification. We do not think it is better if more words are added into dictionary. For example, because \u00d1(Chinese) is in our dictionary, there is much possibility that \u00d1 (China) in \u00d1 is missed identified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6XPPDU\\",
"sec_num": null
},
{
"text": "We also evaluated on the MET2 test data and IEER test data. The results are shown in Table 9 . The results on MET2 are lower than the highest report of MUC7 (PER: Precision 66%, Recall 92%; LOC: Precision 89%, Recall 91%; ORG: Precision 89%, Recall 88%, http://www.itl.nist.gov). We speculate the reasons for this in the following. The main reason is that our class-based LM was estimated with a general domain corpus, which is quite different from the domain of MUC data. Moreover, we didn't use a NE dictionary. Another reason is that our NE definitions are slightly different from MET2. ",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 93,
"text": "Table 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "(YDOXDWLRQZLWK0(7DQG ,((57HVW'DWD",
"sec_num": null
},
{
"text": "In this research, Chinese word segmentation and NE identification has been integrated into a framework using class-based language models (LM). We adopted a hierarchical structure in ORG model so that the nested entities in organization names can be identified. Another characteristic is that our NE identification do not utilize NE dictionary when decoding. The evaluation on a large test set shows consistent improvements. The integration of heuristic information improves the precision and recall of our system. The cache-based LM increases the recall of NE identification to some extent. Moreover, some rules dealing with abbreviations of NEs have increased dramatically the performance. The precision of PER, LOC, ORG on the test set is 79.86%, 80.88%, 76.63%, respectively; and the recall is 87.29%, 82.46%, 56.54%, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&RQFOXVLRQV )XWXUHZRUN",
"sec_num": null
},
{
"text": "In our future work, we will be focusing more on NE coreference using language model. Second, we intend to extend our model to include the part-of-speech tagging model to improve the performance. At present, the class-based LM is based on the general domain and we may need to fine-tune the model for a specific domain. ACKNOWLEDGEMENT I would like to thank Ming Zhou, Jianfeng Gao, Changning Huang, Andi Wu, Hang Li and other colleagues from Microsoft Research for their help. And I want to thank especially Lei Zhang from Tsinghua University for his help in developing the ideas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "&RQFOXVLRQV )XWXUHZRUN",
"sec_num": null
},
{
"text": "NLPWin system is a natural language processing system developed by Microsoft Research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The difference between IEER's guidelines and ours is that the nested person and location name in organization are tagged in our guidelines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Maximum Entropy Approach to Named Entity Recognition",
"authors": [
{
"first": "",
"middle": [
"A"
],
"last": "Borthwick",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Borthwick. A. (1999) A Maximum Entropy Approach to Named Entity Recognition. PhD Dissertation",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An algorithm that learns what's in a name",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwarta",
"suffix": ""
},
{
"first": "",
"middle": [
"R"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "211--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel D., Schwarta R., Weischedel. R. (1997) An algorithm that learns what's in a name. Machine Learning 34, pp. 211-231",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Dellapietra",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "468--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P. F., DellaPietra, V. J., deSouza, P. V., Lai, J. C., and Mercer, R. L. (1992). Class-based n-gram models of natural language. Computational Linguistics, 18(4):468--479.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "MUC-7 Named Entity Task Definition Version 3.5. Available by from ftp",
"authors": [
{
"first": "",
"middle": [
"N"
],
"last": "Chinchor",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinchor. N. (1997) MUC-7 Named Entity Task Definition Version 3.5. Available by from ftp.muc.saic.com/pub/MUC/MUC7-guidelines",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Description of the NTU System Used for MET2",
"authors": [
{
"first": "H",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Y",
"middle": [
"W"
],
"last": "Ding",
"suffix": ""
},
{
"first": "S",
"middle": [
"C"
],
"last": "Tsai",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Bian",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen H.H., Ding Y.W., Tsai S.C. and Bian G.W. (1997) Description of the NTU System Used for MET2",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Toward a unified Approach to Statistical Language Modeling for Chinese",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Gao",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "M",
"middle": [
"J"
],
"last": "Li",
"suffix": ""
},
{
"first": "K",
"middle": [
"F"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2001,
"venue": "IEEE Transaction on Pattern Analysis and Machine Intelligence",
"volume": "12",
"issue": "6",
"pages": "570--583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao J.F., Goodman J., Li M.J., Lee K.F. (2001) Toward a unified Approach to Statistical Language Modeling for Chinese. To appear in ACM Transaction on Asian Language Processing Kuhn R., Mori. R.D. (1990) A Cache-Based Natural Language Model for Speech Recognition. IEEE Transaction on Pattern Analysis and Machine Intelligence.Vol.12. No. 6. pp 570-583",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Description of the LTG System Used",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikheev A., Grover C. and Moens M. (1997) Description of the LTG System Used for MUC-7",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A decision tree method for finding and classifying names in Japanese texts",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shinou",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Sixth Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sekine S., Grishman R. and Shinou H. (1998), \"A decision tree method for finding and classifying names in Japanese texts\", Proceedings of the Sixth Workshop on Very Large Corpora, Canada",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Description of the Kent Ridge Digital Labs System Used",
"authors": [
{
"first": "S",
"middle": [
"H"
],
"last": "Yu",
"suffix": ""
},
{
"first": "S",
"middle": [
"H"
],
"last": "Bai",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu S.H., Bai S.H. and Wu P. (1997) Description of the Kent Ridge Digital Labs System Used for MUC-7",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Study on Chinese Proofreading Oriented Language Modeling",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang L. (2001) Study on Chinese Proofreading Oriented Language Modeling, PhD Dissertation",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Example of Training Process'HFRGHU",
"type_str": "figure"
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>ID</td><td>Domain</td><td colspan=\"3\">Number of NE Tokens</td><td>Size</td></tr><tr><td/><td/><td colspan=\"2\">PER LOC</td><td>ORG</td><td>(byte)</td></tr><tr><td>1</td><td>Army</td><td>65</td><td>202</td><td>25</td><td>19k</td></tr><tr><td>2</td><td>Computer</td><td>75</td><td>156</td><td>171</td><td>59k</td></tr><tr><td>3</td><td>Culture</td><td>548</td><td>639</td><td>85</td><td>138k</td></tr><tr><td>4</td><td>Economy</td><td>160</td><td>824</td><td>363</td><td>108k</td></tr><tr><td>5</td><td>Entertainment</td><td>672</td><td>575</td><td>139</td><td>104k</td></tr><tr><td>6</td><td>Literature</td><td>464</td><td>707</td><td>122</td><td>96k</td></tr><tr><td>7</td><td>Nation</td><td>448</td><td>1193</td><td>250</td><td>101k</td></tr><tr><td>8</td><td>People</td><td>1147</td><td>912</td><td>403</td><td>116k</td></tr><tr><td>9</td><td>Politics</td><td>525</td><td>1148</td><td>218</td><td>122k</td></tr><tr><td>10</td><td>Science</td><td>155</td><td>204</td><td>87</td><td>60k</td></tr><tr><td>11</td><td>Sports</td><td>743</td><td>1198</td><td>628</td><td>114k</td></tr><tr><td/><td>Total</td><td colspan=\"2\">5002 7758</td><td colspan=\"2\">2491 1037k</td></tr></table>",
"num": null,
"text": "7DEOH: Statistics of Open-Test"
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Quality of Training Corpus</td><td/></tr><tr><td>NE</td><td>P (%)</td><td>R (%)</td><td>F (%)</td></tr><tr><td>PER</td><td>61.05</td><td>75.26</td><td>67.42</td></tr><tr><td>LOC</td><td>78.14</td><td>71.57</td><td>74.71</td></tr><tr><td>ORG</td><td>68.29</td><td>31.50</td><td>43.11</td></tr><tr><td>Total</td><td>70.07</td><td>66.08</td><td>68.02</td></tr></table>",
"num": null,
"text": ""
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Baseline Performance</td><td/></tr><tr><td>NE</td><td>P (%)</td><td>R (%)</td><td>F (%)</td></tr><tr><td>PER</td><td>65.70</td><td>84.37</td><td>73.87</td></tr><tr><td>LOC</td><td>82.73</td><td>76.03</td><td>79.24</td></tr><tr><td>ORG</td><td>56.55</td><td>38.56</td><td>45.86</td></tr><tr><td>Total</td><td>72.61</td><td>72.44</td><td>72.53</td></tr></table>",
"num": null,
"text": ""
},
"TABREF6": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Results of Heuristic Information Integrated</td></tr><tr><td colspan=\"2\">into the Class-based LM</td><td/><td/></tr><tr><td>NE</td><td>P (%)</td><td>R (%)</td><td>F (%)</td></tr><tr><td>PER</td><td>77.63</td><td>80.89</td><td>79.23</td></tr><tr><td>LOC</td><td>80.05</td><td>80.80</td><td>80.42</td></tr><tr><td>ORG</td><td>81.23</td><td>36.65</td><td>50.51</td></tr><tr><td>Total</td><td>79.26</td><td>73.41</td><td>76.23</td></tr></table>",
"num": null,
"text": ""
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Results of our system</td><td/></tr><tr><td>NE</td><td>P (%)</td><td>R (%)</td><td>F (%)</td></tr><tr><td>PER</td><td>79.12</td><td>82.06</td><td>80.57</td></tr><tr><td>LOC</td><td>80.11</td><td>81.27</td><td>80.69</td></tr><tr><td>ORG</td><td>79.71</td><td>39.89</td><td>53.17</td></tr><tr><td>Total</td><td>79.72</td><td>74.58</td><td>77.06</td></tr></table>",
"num": null,
"text": ""
},
"TABREF8": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Results of our system</td><td/></tr><tr><td>NE</td><td>P (%)</td><td>R (%)</td><td>F (%)</td></tr><tr><td>PER</td><td>79.86</td><td>87.29</td><td>83.41</td></tr><tr><td>LOC</td><td>80.88</td><td>82.46</td><td>81.66</td></tr><tr><td>ORG</td><td>76.63</td><td>56.54</td><td>65.07</td></tr><tr><td>Total</td><td>79.99</td><td>79.68</td><td>79.83</td></tr></table>",
"num": null,
"text": ""
},
"TABREF9": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>NE</td><td colspan=\"2\">MET2 Data</td><td/><td colspan=\"2\">IEER Data</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td/><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td><td>(%)</td></tr><tr><td>PER</td><td>65.86</td><td/><td/><td/><td/></tr></table>",
"num": null,
"text": "Results on MET2 and IEER 94.25 77.54 79.38 84.43 81.83 LOC 77.42 89.60 83.07 79.09 80.18 79.63 ORG 88.47 75.33 81.38 88.03 62.30 72.96 Total 77.89 86.09 81.79 80.82 76.78 78.75"
}
}
}
}