|
{ |
|
"paper_id": "C92-1029", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:34:22.091154Z" |
|
}, |
|
"title": "Dynamic Programming Method for Analyzing Conjunctive Structures in Japanese", |
|
"authors": [ |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University Yoshida-honmachi", |
|
"location": { |
|
"postCode": "606", |
|
"settlement": "Sakyo, Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Kyoto University Yoshida-honmachi", |
|
"location": { |
|
"postCode": "606", |
|
"settlement": "Sakyo, Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Parsing a long sentence is very difficult, since long sentences often have conjunctions which result in ambiguities. If the conjunctive structures existing in a long sentence can be analyzed correctly, ambiguities can be reduced greatly and a sentence can be parsed in a high successful rate. Since the prior part and the posterior part of a conjunctive structure have a similar structure very often, finding two similar series of words is an essential point in solving this problem. Similarities of all pairs of words are calculated and then the two series of words which have the greatest sum of similarities are found by a technique of dynamic programming. We deal with not only conjunctive noun phrases, but also conjunctive predicative clauses created by \"Renyoh chuushi-ho\". We will illustrate the effectiveness of this method by the analysis of 180 long Japanese sentences.", |
|
"pdf_parse": { |
|
"paper_id": "C92-1029", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Parsing a long sentence is very difficult, since long sentences often have conjunctions which result in ambiguities. If the conjunctive structures existing in a long sentence can be analyzed correctly, ambiguities can be reduced greatly and a sentence can be parsed in a high successful rate. Since the prior part and the posterior part of a conjunctive structure have a similar structure very often, finding two similar series of words is an essential point in solving this problem. Similarities of all pairs of words are calculated and then the two series of words which have the greatest sum of similarities are found by a technique of dynamic programming. We deal with not only conjunctive noun phrases, but also conjunctive predicative clauses created by \"Renyoh chuushi-ho\". We will illustrate the effectiveness of this method by the analysis of 180 long Japanese sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Analysis of a long Japanese sentence is one of many difficult problems which cannot be solved by the continuing efforts of many researchers and remain abaudoned. It is difficult to get a proper analysis of a sentence whose length is more than fifty Japanese characters, and almost all the analyses fail for sentences composed of more than eighty characters. To clarify why it is is also very difficult because there are varieties of reasons for the failures. People sometimes say that there are so many possibilities of modifier/modifyee relations between phrases in a long sentence. But no deeper consideration has ever been given for the reasons of the analysis failure. Analysis failure here means not only that no correct analysis is included in the multiple analysis results which are caused by the intrinsic ambiguity of a sentence and also by inaccurate grammatical rules, but also that the analysis fails in the middle of the analysis pro-re88,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We have been claiming that many (more than two) linguistic components are to be seen at the same time in a sentence for proper parsing, and also that tree to tree transformation is necessary for reliable analysis of a sentence. Popular grammar rules which merge two linguistic components into one are quite insufficient to describe the delicate relationships among components ill a long sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Language is complex. There often happens that components whicb are far apart in a long sentence cooccur, or have certain relationships. Such relations may be sometimes purely semantic, but often they are grammatical or structural, although they are not definite but very subtle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A long sentence, particularly of Japanese, contains parallel structures very often. They are either conjunctive noun phrases, or conjunctive predicative clauses. The latter is called \"Renyoh chuushiho\". They appear in an embedded sentence to modify nouns, and also are used to connect two or more sentences. This form is very often used in Japanese, and is a main cause for structural ambiguity. Many major sentential components are omitted in the posterior part of Renyoh chuushi expressions and this makes the analysis more difficult.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For tbc successful analysis of a long Japanese sentence, these parallel phrases and clauses, including Renyoh chuushi-ho, must be recognized correctly. This is a key point, and this must be achieved by a completely different method from the ordinary syntactic analysis methods, because they generally fail in the analysis for a long sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We have introduced au assumption that these parallel phrases/clauses have a certain similarity, and have developed an algorithm which finds out a most plausible two series of words which can be considered parallel by calculating a similarity measure of two arbitrary series of words. This is realized by using the dynamic programming method. The results was exceedingly good. We achieved the score of about 80% in the detection of various types of parallel series of words in long Japanese sentences. The second type is conjunctive predicative clauses, ill which two or more itredicates ~ arc in a sentence forming a coordination.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We call find these clauses by the ll,enyoh-lbrnts ~ of predicates (Renyoh ehuushi-ho: Table 2(iv)) or by tile predi. cares accompanying one of the words in Table l ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 163, |
|
"text": "Table l", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(b) ('rable 2(v)),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "'['he. third t.ype is CSs consisl.ing of parts of conjtmctire predicatiw~ clauses. We call this type eonjunetlve incomplete structures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We can find these structures by the correspondence of p(xstpositional particles (Table 2(vi)) or by the words in Table l(e) which indicate CSs explicitly ( Table 2(vii)) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 169, |
|
"text": "Table 2(vii))", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "l,br all of these types, it is relatively easy to tind the existence of a CS by detecting a distinctive key bmlsetsu a (we call this bunsetsu 'KB') which accompanies these words explained above. KB lies last in the prior part of at CS, but it is difficult to deter mine which bunsetsu sequences on both side of tile KB constitute a CS. That is, it is not easy to determine which Imnsetsu to tile hfft of a KII is tile leftmost element of the prior part of a CS, and which bunsetsu to the. right of a Kil is tile rightmost element of the posterior part of a US. The bunsetsus betweeu these two extreme elements constitute the scope of the CS. Particularly in detecting this scope of a CS, it is essential to find out the last Imnsetsn in the posterior part of the CS, which corresponds to the KB. q'here art'. lnany candidates for it ill a seatence; e.g., ill a conjunctive noun i)hras~ all nouns after it KII are the candidates. We call snch it candidate bunsetsu '(211'. It is almost impossible to solve this problem merely by using rules based oil phra.se structure grammar. lilt addition to verbs tutti aAjectives~ assertive words (kinds of postpositioxm) \" /d\"(da), \"q2ab5 \"(dearu), \"e-J-\"(desu) and so on, which follow directly after nouus, cm~ be predicate it| d*tl>ltllese.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "~'fhe ending foritls of inflectional words which c;m modify vet|>, ~tdjective, or a~ertivc word au~ c-tiled I~e/lyoh-fornl in .1 apanese. 3 ]~utmetuu is tile Slllgtllet~t ineanhlgful block tx|nsisting of *tit indelxmdcnt word (lW; tmuns, verbs, adjectives, etc.) and aCCOlttpau~yittg word~ (AW; l),xslp~sitio|lal pgu'ticles, &uxiliguy verbs, etc.). ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We detect the scope of CSs by using wide range of information around it KB. 4 An input sentence is first divided into bunsetsus by tile conventional morphological analysis. Then we calculate similarities of all pairs of ~)unsetsus ill a selltence, and calculate a sum of similarities between a series of bunsetsus on the left of a KII and a series of bunsetsus on the left of a CB. Of all the pairs of the two series of Imnsetsus, the pair which has the greatest sum of similarities is determined as the scope of the CS. We wilt explain tins process in detail in the following.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structures", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An appropriate similarity value between bunsetsus is given by the following process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Bunsetsus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 If the parts of speech of IWs (independent words) are equal, giw~ 2_j>oints as the similarity values. Then go to the next stage and add further the following I)oints.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Bunsetsus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "1. If IWs match exactly (by character level) each other, add 10 points and skip the next two steps and go to tile step 4. IflWs are inflected, infinitives are compared. 3. Add points for semantic similarities by using the thesaurus 'Buurui Goi Ityou' (BGH) [3] . BGH has the six layer abstraction hierarchy and more than 60,000 words are assigned to the leaves of it. If the most specific common layer between two IWs is the k-th layer and if k is greater than 2, add (k -2) \u00d7 2 points. If either or both IWs are not contained in BGH, no addition is made. Matching of the generic two layers are ignored to prevent too vague matching in broader sense.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 260, |
|
"text": "[3]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Bunsetsus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "4. If some of AWs (accompanying words) matcb, add the number of matchin$ AWs x 3 points.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Bunsetsus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Maximum sum of the similarity values which can be added by the steps 2 and 3 above is limited to 10 points.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Bunsetsus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Although the parts of speech oflWs are not equal, give 2_.points if both bunsetsus can be predicate (see footnote 1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Bunsetsus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For example, the similarity point between \"~ ~Pi~ (low level language) +,\" and \" ~lt\u00a2,~'~ (high level language) + ~ (and)\" is calculated as 2(match of parts of speech) + 8(match of four characters: Y~l/t~ ~) = 10 points. The point between \" ~]'aq~ (revision) + L (do) +,\" and \"l~U3(deteetion) +'J-~ (do)\" is 2(match of parts of speech) + 2(match by BGII) + 3(match of one AWs) -7 points.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Bunsetsus", |
|
"sec_num": "3.1" |
|
}, |
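
{

"text": "The scoring rules above can be condensed into a short sketch. The following Python fragment is our own illustration, not the authors' implementation; the bunsetsu representation (a dict with 'iw' for the independent word in infinitive form, 'pos', 'aw' for the list of accompanying words, and 'pred' for whether the bunsetsu can be a predicate) and the helper bgh_layer are assumptions, and counting shared characters with a set intersection is only a rough stand-in for the character-level partial match:\n\ndef bunsetsu_similarity(b1, b2, bgh_layer):\n    # bgh_layer(w1, w2): most specific common BGH layer of two words, 0 if unknown.\n    if b1['pos'] != b2['pos']:\n        # unequal parts of speech: 2 points only if both bunsetsus can be predicates\n        return 2 if b1['pred'] and b2['pred'] else 0\n    score = 2                                        # parts of speech match\n    if b1['iw'] == b2['iw']:                         # step 1: exact character-level match\n        score += 10\n    else:\n        partial = 0\n        if b1['pos'] == 'noun':                      # step 2: partial character match for nouns\n            partial += 2 * len(set(b1['iw']) & set(b2['iw']))\n        k = bgh_layer(b1['iw'], b2['iw'])            # step 3: BGH semantic similarity\n        if k > 2:\n            partial += 2 * (k - 2)\n        score += min(partial, 10)                    # steps 2 and 3 are capped at 10 points\n    score += 3 * len(set(b1['aw']) & set(b2['aw']))  # step 4: matching accompanying words\n    return score\n\nApplied to the first example above, 2 points for the matching parts of speech plus 8 points for the four shared characters give the quoted 10 points.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Similarities between Bunsetsus",

"sec_num": "3.1"

},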
|
{ |
|
"text": "Our method detects the scope ofa CS by two series of bunaetsus which have the greatest similarity. These two aeries of bunsetsus are searched for on a triangular matrix A = (a(i,j)) ( Figure 1 ), whose diagonal element a(i,i) is the i-th bunsetsu in a sentence and whose element a(i,j) (i < j) is the similarity value between bunsetsu a(i,i) and bunsetsu a(j,j).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 192, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We call the rectangular matrix A' a partial matrix, where A' =(a(i,j)) (O< i< n; n+ l < j <1) The starting element of a path shows the correspondence of a KB to a CB. A path has only one element from eacb column and extends towards the upper left. We calculate the similarity between tbe series of bunsetsus on the left side of the path (sbl in Figure 1 ) and the series under the path (sb2 in Figure 1 ) as a path score by the following four criteria:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 354, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 403, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. Basically the score of a path is tile sum of each element's points on the path. But if a part of the path is horizontal (a(i,j),a(i,j -1)) as shown in Figure 2 , which leads the bunsetsu correspondence of one element a(i, i) to two elements a(j-1, j-1) and a(j,j), the element's points a(i,j -1) is not added to the path score.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 162, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2. Since a pair of conjunctive phrases/clanses often appear ~s a similar structure, it is likely that both cmdunctive phrases/clauses contain nearly the same numbers of bunsetsus. Therefore, we impose penalty points on the pair of elements in the path which causes the one-to-plural bunsetsu correspondence so as to give a priority to the CS of the same size. Penalty point for 'Fable 3: Separating levels (SLs).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "Table 3: Separating levels (SLs). SL 5: being the KB of a conjunctive predicative clause, or accompanying the topic-marking postpositional particle \"ha\" and a comma. SL 4: accompanying a postpositional particle which does not create a conjunctive noun phrase, together with a comma, or being an adverb accompanying a comma. SL 3: being the Renyoh-form of a predicate which does not accompany a comma, or accompanying the topic-marking postpositional particle \"ha\". SL 2: being the KB of a conjunctive noun phrase accompanying a comma. SL 1: accompanying a comma, or being the KB of a conjunctive noun phrase not accompanying a comma.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Similarities between Two Series of Bunsetsus",

"sec_num": "3.2"

},
|
{ |
|
"text": "(a(pl,j),a(pi+~,j -1)) is calculated by the for mule ( Figure 3 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 55, |
|
"end": 63, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "), [p, -pi+x -11 X 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Tim penalt,y points are subtracted from the path score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3. Since each phrase in the CS has a certain cOherency of meaning, speciM words which separate the meaning in a sentence often limit the scope of a CS. If a path includes such words, we impose penalty points on the path so that the fmssihility of including those are reduced. We define five 'separating-levels' (SLs) for hunsetsus, which express the strength of separating a sentence meaning (Table 3 , of. Tahle 1). If bunsetsus on the left side of the path ~md under it include a bunsetsu whose SL is equal to KB's SI, or higher than it, we reduce the path score by (SL of the hunsetsu -KB's SL + 1) x 7.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 400, |
|
"text": "(Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "ltowever, two high SL bunsetsus corresponding to each other ofteu exist in a CS, and those do not limit the scope of the CS. For example, topicmarking postpositional particles correspond each other in the following sentential style, Therefore, when two high SL bunsetsus correspond in a CS, that is, the path includes the element which indicates the similarity of them, and those are the 'same-type', the penalty points on them arc not axlded to tile path score. We define thc same-type bunsetsus ~LS two bunsetsus which satisfy the following two conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 IWs of them are of the same part of speech, and they have the identical inflection whcn they arc inflectional words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 AWs of them arc identical. . Some words frequently become tile AW of the last bunsetsu in a CS or the IW following it. These words thus signal the end of the CS. Such words are shown in Table 4 , Bonus points (6 points) are given to the path which indicates the CS ending with one of the words in Table 4 , as that path shouhl he preferred.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 195, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 306, |
|
"text": "Table 4", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarities between Two Series of Bunsetsus", |
|
"sec_num": "3.2" |
|
}, |
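
{

"text": "As a concrete illustration of how criteria 1 and 2 combine, here is a small Python sketch (ours, not the authors' code) that scores one candidate path; the separating-level penalty of criterion 3 and the bonus of criterion 4 are omitted, sim is the precomputed bunsetsu-similarity matrix, and the path is given as the list of its row indices, column by column from the starting column leftwards:\n\ndef path_score(rows, start_col, sim):\n    # rows[k] is the row of the path element in column start_col - k, so the path is\n    # (a(rows[0], start_col), a(rows[1], start_col - 1), ...).\n    score = 0\n    for k, i in enumerate(rows):\n        if k > 0 and i == rows[k - 1]:\n            continue                                 # criterion 1: horizontal element ignored\n        score += sim[i][start_col - k]               # criterion 1: sum of element points\n    for k in range(len(rows) - 1):\n        score -= abs(rows[k] - rows[k + 1] - 1) * 2  # criterion 2: size-imbalance penalty\n    return score\n\nA strictly diagonal path, in which every bunsetsu is paired with exactly one bunsetsu, incurs no criterion-2 penalty, since rows[k] - rows[k + 1] - 1 is then zero for every step.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Similarities between Two Series of Bunsetsus",

"sec_num": "3.2"

},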
|
{ |
|
"text": "As for each non-zero element in the lowest row ill a partial matrix A' in Figure l, we search for tile best path from it which has the greatest path score by a technique of the dynamic programming. Calculation is performed cohuun by columu in the left direction from a non-zero element. For each elenmnt in a col.. umn, the hast partial path including it in found by extending the partial paths from the previous cohmm and by choosing the path with the greatest score. Then among the paths to the leftmost column, the path which ha.s the greatest score becomes the best path from the non-zero element (Figure 4 ). Of all the best paths from non-zero elements, the path which have the maximum path score defines the scope of bhe CS; i.e., the series of bunsetsus on tim left side of the maximum path and the series of bunsetsus under it are conjunctive ( Figure 5 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 601, |
|
"end": 610, |
|
"text": "(Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 854, |
|
"end": 862, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Finding the Conjunctive Structure Scope", |
|
"sec_num": "3.3" |
|
}, |
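
{

"text": "The column-by-column search described above can be sketched as follows. This is our own simplified illustration, not the authors' code: the penalty and bonus terms of Section 3.2 are left out, sim is the bunsetsu-similarity matrix, bunsetsus are numbered 1..l, the KB is bunsetsu n, and m is the column of one non-zero element a(n, m) in the lowest row:\n\ndef best_path_score(sim, n, m):\n    NEG = float('-inf')\n    best = [NEG] * (n + 1)           # best[i]: score of the best partial path whose element\n    best[n] = sim[n][m]              # in the current column lies in row i; start at a(n, m)\n    for j in range(m - 1, n, -1):    # extend the partial paths column by column to the left\n        new = [NEG] * (n + 1)\n        for i in range(1, n + 1):\n            cand = best[i]           # horizontal step: a(i, j) is not added to the score\n            for p in range(i + 1, n + 1):\n                if best[p] != NEG:   # step towards the upper left from row p in column j + 1\n                    cand = max(cand, best[p] + sim[i][j])\n            new[i] = cand\n        best = new\n    return max(best)                 # the best path from a(n, m) reaching the leftmost column\n\nRunning this for every non-zero element in the lowest row and keeping the overall maximum gives the path that defines the scope of the CS.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Finding the Conjunctive Structure Scope",

"sec_num": "3.3"

},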
|
{ |
|
"text": "We illustrate the effectiveness of our method by the analysis of 180 Japanese sentences. 60 sentences which are longer aud more complex than the average sentences are collected from each of the following three sources; Encyclopedic Dictionary of Computer Science (EDCS) published by lwanami Publishing Co., Abstracts of papers of Japan Information Center of Science and Technology (JICST), and popular science journal, \"Science\", translated into Japanese (Vol.17,No.12 \"Advanced Computing for Science\"). Each group of 60 sentences consists of 20 sentences from 30 to 50 characters, 20 sentences from 50 to 80 characters, and 20 sentences over 80 characters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Discussion", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As described in the preceding sections, many factors have effects on the analysis of CSs, and it is very important to adjust the weights for each factor. The method of calculating the path score was adjusted during the experiments on 30 sentences out of 60 sentences from EDCS. Then the other 150 sentences are analyzed by these parameters. As the analyses were successful as shown in the following, this method can be regarded as properly representing the balanced weights on each factor. This method defines where the CS ends, that is, which bunsetsu corresponds to the KB. However, as for conjunctive noun phrases containing clause modifiers or conjunctive predicative clauses, it is almost impossible to find out exactly where the CS starts, because mm~y bunsetsus which modify right-hand bunsetsus exist in each part of the CSs and usnally they do not correspond exactly. Thus it is necessary to revise the starting position of the CS obtained by this method. We treat the actual prior part of a CS as extending to bunsetsus which modify a bunsetsu in the prior part of it obtained by this method, unless they contain comma or topic-marking postpositional particle \" #2 \"(ha).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Discussion", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Examples of correct analysis are shown in Figure 6 -8. The revisions of CS scopes are shown in notes of each figure. Chains of alphabet symbols attached to matrix elements show the maximum path concerning the KB marked by the same alphabet and '>'. In the case of example(a) in Figure 6 , the conjunctive noun phrase, in which eight nouns are conjuncted (chains of %', 'b', ... 'g'), is analyzed rightly thanks to the penalty points by SLs of every comma between nouns. Thus, the CS consisting of more than two It k a kind of ~c te~ce which analyz~ the e,seats ~uui tmatre related to info~ttation'$ occmrence, collection, systematiz~o~, ~afi~ t retrieval, uaderstendia,8, c,. commtmicmtlon, and application, tad so on, Lad inv~tigat~ social tdaplability of the clarified mass*. parts is expressed by tile repetition of the combination of CSs consisting of two parts, in this example, also the conjunctive predicative clause is analyzed rightly (chains of 'h').", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 50, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF8" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 286, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Examples of Correct Analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the case of example(b) in Figure 7 , the CS which consists of three noun phrases containing modifier clauses is detected as tile combination of the two consecutive CSs like example(a) (chain of 'a' and 'b').", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 37, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Examples of Correct Analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In tile case of example(c) ill Figure 8 , the conjunctive noun phrase and the conjunctive predicative clause containing it is analyzed rightly. In this example, the successful analysis is due to the penalty points by SL of the topic-marking postpositional particle \" \" in \"~ff~g~l~rl~t (a computational e~:periment)\"", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 39, |
|
"text": "Figure 8", |
|
"ref_id": "FIGREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Examples of Correct Analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "and \",~1~ ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Correct Analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We evaluated tile analysis result of 180 Japanese sentences by hand. The results of cvaluatlug every sentence by each CS type are shown in Table 5 . If tile same typc CSs exist two or more ill a sentence, the analysis is regarded as a success only when all of them are analyzed rightly.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 146, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experinaental Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "There arc 144 conjunctive noun phrases ill 180 sentences, and ll9 phrases among them are analyzed rightly. Tbe success ratio is 83%. There are 118 conjunctive predicative clauses ill 180 sentences, and 94 clauses among them are analyzed rightly. The success ratio is 80%. There are 3 pairs of the conjunctive incomplete structures, and all of them are analyzed rightly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experinaental Evaluation", |
|
"sec_num": "4.2" |
|
}, |
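
{

"text": "As a quick arithmetic check of the quoted percentages, using only the counts reported above (a trivial Python snippet of ours):\n\nprint(round(119 / 144 * 100))  # 83: success ratio for conjunctive noun phrases\nprint(round(94 / 118 * 100))   # 80: success ratio for conjunctive predicative clauses",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Evaluation",

"sec_num": "4.2"

},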
|
{ |
|
"text": "As showu in ]'able 5, the sucecss rate for tile Selltences from J1CST abstracts arc worse than that of the sentences from other sources. The reason for the failures is that tile sentences are often very ambiguous and confusing even for a lluman because they have too many contents in a sentence to satisfy the limitation of tile docnment size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experinaental Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "and Solutions for Them", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Wc give examples of failure of analysis (Table 6 , Figurc 9) , and indicate st)lutions for them. In Table 6 , underlined parts show the KBs, I-...d shows tile wrongly analyzed scope, and r ... j shows the right scope.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 48, |
|
"text": "(Table 6", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 51, |
|
"end": 60, |
|
"text": "Figurc 9)", |
|
"ref_id": "FIGREF12" |
|
}, |
|
{ |
|
"start": 100, |
|
"end": 107, |
|
"text": "Table 6", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 It is essential ill this method to define the appropriate similarity between words. Thus changing the sinlilarity points for more detailed groups of parts of speech (e,g. nouns call be divided into ilul~lerals~ proper nonns, conlmon nouns, and action nouns which becomc verbs by the combiuation with \" ~-~ (do)\") can improve the accuracy of the anMysis. For example, the example(i) in 'Fable 6 may bc analyzed rightly if the similarity points between action noun \"t1~[~ (extension)\" and action noun \"t~'f (maintenance)\" is greater than that between action noun \" t1~ (extension)\" and common noun \" ~1~ (di~cully)\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Semantic similarities between words are currently calculated only by using BOIl which do not contain technical terms. If tile sinfilarity points between technical terms can be given by thesaurus, tile accuracy of tile analysis will be improved. Example(ii) will be analyzed rightly if greater points are given to tile similarity between \" T P \"T 4 7\". -k 4.--b ~f~'~ ( Actlve Chart Parsing)\" and \"llPSG( Head-drtve, Phrase Structure Grami lly the additional usage of relatively simple syntactic conditions, some sentences which are analyzed wrongly by this method will be analyzed rightly. For example, because Japanese modifier/modifyee relations, inchnling the relation between a verb and its case frame elements, do not erc~s each other, the modifier/modifyee relations in nmm phrases and predicative clauses do not spread beyond each phrase or clause, except the relation concerning the last bunsetsu of them. This condition is not satisfied by the analyzed CS in the example(ill) whose prior noun phrase contains no verb related with the case frame element \" [-~111.[O (of rules) [~t~-af , (extension and) {~ ,'~a9 (of mainte-.ance) can be the prior part of the CS. We are planning to do such a correction in the next stage of the syntactic analysis, which analyzes all modifier/modifyee relations in a sentence using the CS scopes detected by this method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1066, |
|
"end": 1139, |
|
"text": "[-~111.[O (of rules) [~t~-af , (extension and) {~ ,'~a9 (of mainte-.ance)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 in example(iv), the KB in the beginning part of a sentence corresponds to the last CB. That is, a short part of a sentence corresponds to the following long part. It is very difficult to analyze such an extremely unbalanced CS because this method gives a priority to similar CSs. In order to analyze example(iv) the causal relationship between \"~1~-9\"C (usiug)\" and \"~tr~'J~z~ (create)\" will be necessary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 Some sentences analyzed incorrectly are too subtle even for a human to find the right CSs. Exampie(v) cannot be analyzed rightly without expert knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 This method cannot handle the CSs in which the prior part contains some modifiers and the posterior part contains nothing corresponding to them (example(vi), Figure 9 ). For these structures we must think the path extending upward in a partial matrix, but it is impossible by the criteria about word similarities alone.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 168, |
|
"text": "Figure 9", |
|
"ref_id": "FIGREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The CSs such as example(v) and example(vi) cannot be analyzed correctly without semantic informs-tion. fIowever such expressions are very few in actual text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Examples of Incorrect Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We have shown that varieties of parallel structures in Japanese sentences can be detected by the method explained in this paper. As the result, a long sentence can be reduced into a short one, and the success rate of syntactic analysis of these long sentences will bccome very high.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are still some conjunctive expressions which cannot be recognized by the proposed method, and we are tempted to rely on semantic information to get proper analyses for these remaining cases. Semantic information, however, is not so reliable as syntactic information, and we have to make further efforts to find out syntactic rather than semantic relations in these difficult cases. We think that it is possible. One thing which is certain is that we have to see many more components simultaneously in a wider range of word strings of a loug sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Concluding Remarks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We (Io not halldle Colljullclive predicatiw~ el*tune* cteatexl by the Itcnyoh fc*rtns of predicates (|{enyoh c|nmshi-ho) which do ltOt accompany COllllll*t, })e\u00a2llll~: almost all of these prc,llc,ties iilOdify thc llCXL llt~al\u00a2~st [)l'edicltte lilld there is 11~) need t,~ chc<:k the possibility of conjunct|oil.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Conjunctive Phrases in Scientific and Technical Papers and Their Analysis", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Nagao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Tanaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lshikawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "IPSJ-WG", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Nagao, J. Tsujii, N. Tanaka, M. lshikawa (1983) Conjunctive Phrases in Scientific and Technical Papers and Their Analysis (in Japanese). IPSJ-WG, NL-36-4.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Coordinate Structures", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Shudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Yoshimura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Tsuda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "Japanese Tehuical Sentences", |
|
"volume": "27", |
|
"issue": "", |
|
"pages": "183--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Shudo, K. Yoshimura, K. Tsuda (1986) Co- ordinate Structures in Japanese Tehuical Sen- tences (in Japanese). 7~'ans.IPS Japan, Vol.27, No.2, pp.183-190.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "(ii) .,./~.~ag (so.,'~e l.avuage text) ~9 (o]) g~dCc (analysis) ~(and) ~1~ ( ta,'gct language tezt) \u00a9 ( oJ) ~k ~ ~ (.qeneration) ... (iii) ... ~'~g~ ( so,,'ce languaqe text) ~i$~T b ( aualyziag) ~t~t{(processmg) ~(a,d) ~Hl~g~: (target language text) _d~r~3 ~ 6 (generating) ~..~\" (processing) . .. I Conjunctive predicative clauses tag) , ~AI=J2~ ( ta,'uet l.,,9uage text) tl:.~'# ~ (9c,ertatittg) (~ (processi,,g) ...). tLl~ (9eneration) :eta (]o,.) *1Jill L @ ~ (do .ot ~e)", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "If both IWs are nouns and they match par tially by character level, ad<l the number of matchin~ characters x 2 ]mints. .................. r ................... A path.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "t,(i, ~3~ .......... i'---~i, j-I )-~i, 9 An ignored element. ..... ~ ......... ~,~: .......... ! ....... i c5~.( \",, i ..... i ........ -2 'v-\\ ........ ~ ....... ..... i ........... :;' i.:' :~':~ .....~ .......", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Penalty points.is the upper right part of a KB (Figure 1). In tile following, 1 indicates the number of bunsetsus and a(n, n) is a KB. We define a path as a series of elements from a non-zero element in the lowest row to an element in the leftmost column of a partial matrix(Figure 1).path ::= (a(pl, m), a(p2 .... 1) ..... a(p ..... + 1)), where n + l < m <1, a(pl,m) \u00a2 O, Pi = n, PI>>.PI+I( 1 <i<m-n-1).", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "h ~. L'C ~ (As to A) .... -cab 9 (be), f~ ~ L~c ~ (~s to ~l) .... -c.~ (~e).", |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "~vl t .... ~--.~---~---, ~---,.---~.--~---,---~--~..-,", |
|
"uris": null |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The best path from a element. ~:~-d ........... ~'.7 ~F \"'~'\"'~'~J'ne'n,aximtun path. The maxinmm path specifying a conjunctive structure. ACqES DE COLING-92, NANTES, 23-28 ^o~r 1992 1 7 3 PROC. OV COLING-92, NAWrEs, AIJo. 23-28, 1992", |
|
"uris": null |
|
}, |
|
"FIGREF8": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "An example of analyzing conjunctive structures (a) 7't~szl!lllH$. 2 ~/\" ~\" is iteiua~a. Pro~)ramming l~a~uises ame de fn~M to have objectives that they c~n d~cribe ~arious co~Is of p~oblem fields, that they can ~Irictly describe algoritlm~ for *olving \u2022 problem, and that they cia drive fuactions of m computer tuffickmfly.", |
|
"uris": null |
|
}, |
|
"FIGREF9": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "An example of analyzing conjunctive structures (b)", |
|
"uris": null |
|
}, |
|
"FIGREF10": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "in that)\" which are the outside of tile CS and the bonus points by the AW \" ~ v, 5 \" in the last bunsetsu of the CS .AcrEs DE COLING-92, NANTES. 23-28 Aot~'experiment is bener ill that infeasible cxpaiments can be dolm slid paran~eters il~z~cesuible to \u00a2xpelltllent or ~?aliOll \u00a2il1 be measttred All example of analyzing conjunctive structures (C).", |
|
"uris": null |
|
}, |
|
"FIGREF11": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "~,~\" (grammar)\". By this condition it can be~-~timated that only \" 17~1/~[0 (natural langlage) MI~ ~ (analysis and)\" or \"~i:~: (analysis and)\"", |
|
"uris": null |
|
}, |
|
"FIGREF12": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "l []~A~$~ ( di.Oieultv) 3 3:=Hr b Sb ~5 (can be thought).(ii) ... H;tg~Jt~3~ll~to~t~ (Japanese dialogue analytic module), r V~ltyf~\u00a9 (o~ the analysis procesa)~l~r~ (control) I~t ~ ~ (be free) T ~' ~\" 4 P\" \" ~\" ~r --b ~:~\u00a3~ )\" (Active Chart Parsing and) ffl--~ ~ (on unification) ~\"~ ~ ~ (based) ~t~W -tl~ (lexicon based) 3~.tt:~tg~\"C ~b i5 (being the grammatical framework)3 H P S G ~\" ( HPSG)J J~ L ~ ~ ~, (be adopted).(iii) [-~--t9 (one) 3~ (grammaO ~1~\u00a2~]~\u00a9 (natural language) t~lq~ ~ (analysis and) t5~3~2 (generation)J fIJ~a (using) ~gl5]~3~je~a) (of bi-directional grammar) ~?l (the research), 3 ilf~f~agA:i)~t9 \u00a2~ (in point of computational linguistics), ~ll~ll~t~ (machine translation and) 1~t~4 Y \u2022 7 ~-:x ~. w-9 ?c (such as natural language interlace) l~Jl~J~\u00a29 ~ (from the point of view of an application} ~.'Oi15~a (be importaut). (73chs) (iv) ~ (in fact), tll~l~ ~ ~ (authors) ~ r U k'b ~\" (it) ~E.9\"X: (using), ~ll)'J~ff~l~J~r~: (gravitationally interacting) 3F.~. \"j\" ~5 (governing)J ~{tgto (astronomical) ~-gw'~ (about the motion), ?~\u00a2I~PJ\u00a3~ (high-precision) ill.a) (high-speed) I~1\"~ ~: (numerical computation) ~ ~ t5 (can) ~ 4 ~ ~ x. . ~ t. ~3 --& ~ 5 (called Digital Orrery) ~ = :/~\" ~ --$t -t (s~vial-pu,'pase computer)~t.'r ~ ~ (create). J (v) ... f [-~II+3~I~T~ (for illegal sentences) ~3[:~:'~ (termination and) Ikl~ ?5 (outputted) ~ (o) sen'fences) ~bw ~ t~ ~ a) (of ambiguities)3 .]:~t~:'gv~'~ (about the maximum)J ~rk v. (there is no guarantee). (vi) ... r~ ~ (/or every expression) J~btc (prepared) [-~aa) ( in a combinative structure) ~__._~ ~ (combinative elements) 3~r~O (in a sentence) lgh~.~ ~\" \u00a29 (between case elements)3 J ~.~: (correspondence) ... i&~&l= 0 2 2 2 2 2 (tctev~lexp*~*au) i ~L?: O 0 0 0 0 <pr+~.,tred) 4\\XI~'~I~I~*6D g 81 5 2 (ham~mbinltivel~alcture) An example of failure of analysis.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td colspan=\"3\">'l'alfle 1: Wards indicating conjunctive structures.</td></tr><tr><td>~-</td><td colspan=\"2\">(a) Conjunctive noun phrases</td></tr><tr><td colspan=\"3\">|$,,/ctl *a 6@|c a55~,tt ~a b < ~t</td></tr><tr><td colspan=\"3\">L. ~)_.~+(~ Conjunctive predic~ttive clauses</td></tr><tr><td>~.la</td><td colspan=\"2\">(c) Conjunctive incomplete structures</td></tr><tr><td/><td>~ ~-Tgg~a ~gggK</td><td>~ETK</td></tr><tr><td colspan=\"3\">' + ' means succession of words. Characters in ~( )' may</td></tr><tr><td colspan=\"2\">or may not aplmar.</td></tr><tr><td colspan=\"3\">can find these phrases by tile words for conjunction</td></tr><tr><td colspan=\"3\">listed up in Table l(a). Each conjunctive noun some-</td></tr><tr><td colspan=\"3\">times has adjectival modifiers (Table 2(il)) or clause</td></tr><tr><td colspan=\"2\">modifiers (Table 2(iii)).</td></tr><tr><td/><td/><td>, 1992</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>Conjunctive noun phrases</td></tr><tr><td>(i) ... lMgr (analysis) ~ ( a,,l} *_l:~ + (generation) ...</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Examples of conjunctive structures.", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>~ll~t A-tA~</td><td>(~Josjultctive noun phrases</td></tr><tr><td>4</td><td/></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Words for honuses.", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>(i) r ta.b (these) ~\u00a9</td><td>(of analysis methods) ~i~ L \u00a2c (common) Pdl~ & L T_ (as problems) ~l~lJ~: (grammar</td></tr><tr><td colspan=\"2\">rules) 9k ~ < ~k-9 ~: (increasing)</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Examples of failure of analysis. Jib.a9 (in the case)", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |