{
"paper_id": "C04-1024",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:19:51.262424Z"
},
"title": "Efficient Parsing of Highly Ambiguous Context-Free Grammars with Bit Vectors",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"addrLine": "Azenbergstr. 12",
"postCode": "D-70174",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "2004",
"venue": "COLING 2004",
"identifiers": {},
"abstract": "An efficient bit-vector-based CKY-style parser for context-free parsing is presented. The parser computes a compact parse forest representation of the complete set of possible analyses for large treebank grammars and long input sentences. The parser uses bit-vector operations to parallelise the basic parsing operations. The parser is particularly useful when all analyses are needed rather than just the most probable one.",
"pdf_parse": {
"paper_id": "C04-1024",
"_pdf_hash": "",
"abstract": [
{
"text": "An efficient bit-vector-based CKY-style parser for context-free parsing is presented. The parser computes a compact parse forest representation of the complete set of possible analyses for large treebank grammars and long input sentences. The parser uses bit-vector operations to parallelise the basic parsing operations. The parser is particularly useful when all analyses are needed rather than just the most probable one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Large context-free grammars extracted from treebanks achieve high coverage and accuracy, but they are difficult to parse with because of their massive ambiguity. The application of standard chart-parsing techniques often fails due to excessive memory and runtime requirements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Treebank grammars are mostly used as probabilistic grammars and users are usually only interested in the best analysis, the Viterbi parse. To speed up Viterbi parsing, sophisticated search strategies have been developed which find the most probable analysis without examining the whole set of possible analyses (Charniak et al., 1998; Klein and Manning, 2003a) . These methods reduce the number of generated edges, but increase the amount of time needed for each edge. The parser described in this paper follows a contrary approach: instead of reducing the number of edges, it minimises the costs of building edges in terms of memory and runtime.",
"cite_spans": [
{
"start": 311,
"end": 334,
"text": "(Charniak et al., 1998;",
"ref_id": "BIBREF0"
},
{
"start": 335,
"end": 360,
"text": "Klein and Manning, 2003a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The new parser, called BitPar, is based on a bit-vector implementation (cf. (Graham et al., 1980) ) of the well-known Cocke-Younger-Kasami (CKY) algorithm (Kasami, 1965; Younger, 1967) . It builds a compact \"parse forest\" representation of all analyses in two steps. In the first step, a CKY-style recogniser fills the chart with constituents. In the second step, the parse forest is built top-down from the chart. Viterbi parses are computed in four steps.",
"cite_spans": [
{
"start": 75,
"end": 96,
"text": "(Graham et al., 1980)",
"ref_id": null
},
{
"start": 132,
"end": 144,
"text": "Kasami (CKY)",
"ref_id": null
},
{
"start": 155,
"end": 169,
"text": "(Kasami, 1965;",
"ref_id": "BIBREF3"
},
{
"start": 170,
"end": 184,
"text": "Younger, 1967)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Again, the first step is a CKY recogniser which is followed by a top-down filtering of the chart, the bottom-up computation of the Viterbi probabilities, and the top-down extraction of the best parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organised as follows: Section 2 explains the transformation of the grammar to Chomsky normal form. The following sections describe the recogniser algorithm (Sec. 3), improvements of the recogniser by means of bit-vector operations (Sec. 4), the generation of parse forests (Sec. 5), and Viterbi parsing (Sec. 6). Section 7 discusses the advantages of the new architecture, Section 8 describes experimental results, and Section 9 summarises the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The CKY algorithm requires a grammar in Chomsky normal form, where the right-hand side of each rule consists either of two non-terminals or of a single terminal symbol. BitPar uses a modified version of the CKY algorithm which also allows chain rules (rules with a single non-terminal on the right-hand side). BitPar expects that the input grammar is already epsilon-free and that terminal symbols only occur in unary rules. Rules with more than two non-terminals on the right-hand side are split into binary rules by applying a transformation algorithm proposed by Andreas Eisele 1 . It is a greedy algorithm which tries to minimise the number of binarised rules by combining frequently cooccurring symbols first. The algorithm consists of the following two steps, which are iterated until all rules are either binary or unary: (1) compute the frequencies of the pairs of neighbouring symbols on the right-hand sides of rules; (2) choose the most frequent pair ⟨A,B⟩, add a new non-terminal X, replace the pair A B in all grammar rules with X, and finally add the rule X → A B to the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar Transformation",
"sec_num": "2"
},
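The greedy binarisation described above can be sketched as follows. This is a minimal illustration, not Eisele's actual (unpublished) algorithm; the function and symbol names are my own, and tie-breaking between equally frequent pairs is arbitrary here.

```python
from collections import Counter

def binarise(rules):
    """Greedily binarise rules with long right-hand sides.
    rules: list of (lhs, rhs-tuple). Returns rules whose rhs has length <= 2."""
    rules = [(lhs, list(rhs)) for lhs, rhs in rules]
    fresh = 0
    while True:
        # Step 1: count neighbouring symbol pairs on long right-hand sides.
        pairs = Counter()
        for _, rhs in rules:
            if len(rhs) > 2:
                for i in range(len(rhs) - 1):
                    pairs[(rhs[i], rhs[i + 1])] += 1
        if not pairs:
            break  # all rules are binary or unary
        # Step 2: replace the most frequent pair <A,B> by a new symbol X
        # and add the rule X -> A B.
        (a, b), _ = pairs.most_common(1)[0]
        x = f"X{fresh}"; fresh += 1
        for _, rhs in rules:
            i = 0
            while i < len(rhs) - 1:
                if len(rhs) > 2 and rhs[i] == a and rhs[i + 1] == b:
                    rhs[i:i + 2] = [x]
                i += 1
        rules.append((x, [a, b]))
    return [(lhs, tuple(rhs)) for lhs, rhs in rules]
```

Combining the most frequent pair first tends to let several long rules share the same auxiliary symbol X, which is what keeps the binarised grammar small.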
{
"text": "In the first step, the parser computes the CKY-style recogniser chart with the algorithm shown in Figure 1. It uses the transformed grammar with grammar rules P and non-terminal symbol set N. The chart is conceptually a three-dimensional bit array containing one bit for each possible constituent. A bit is 1 if the respective constituent has been inserted into the chart and 0 otherwise. The chart is indexed by the start position, the end position and the label of a constituent 2 . Initially all bits are 0. This chart representation is particularly efficient for highly ambiguous grammars like treebank grammars where the chart is densely filled. Like other CKY-style parsers, the recogniser consists of several nested loops. The first loop (line 3 in Fig. 1 ) iterates over the end positions e of constituents, inserts the parts of speech of the next word (lines 4 and 5) into the chart, and then builds increasingly larger constituents ending at position e. To this end, it iterates over the start positions b from e-1 down to 1 (line 6) and over all non-terminals A (line 7). Inside the innermost loop, the function derivable is called to compute whether a constituent of category A covering the words b through e is derivable from smaller constituents via some binary rule. derivable loops over all rules A → B C with the symbol A on the left-hand side (line 11) and over all possible end positions m of the first symbol on the right-hand side of the rule (line 12). If the chart contains B from position b to m and C from position m+1 to e (line 13), the function returns true (line 14), indicating that the words b through e are reducible to the non-terminal A. Otherwise, the function returns false (line 15).",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 104,
"text": "Figure",
"ref_id": null
},
{
"start": 756,
"end": 762,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Computation of the Chart",
"sec_num": "3"
},
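The loop structure of the recogniser can be sketched in Python, using integers as bit vectors over the non-terminal set. This is an illustrative re-implementation, not BitPar's code: symbols are numbered 0..n_syms-1, positions are 1-based as in the paper, and chainvec[A] is assumed to be the precomputed chain-rule closure described in the text.

```python
def recognise(words, lex, rules_by_lhs, chainvec, n_syms):
    """CKY-style recogniser after Fig. 1. chart[b][e] is an integer whose
    bit A is set iff words b..e reduce to non-terminal A."""
    n = len(words)
    chart = [[0] * (n + 1) for _ in range(n + 1)]   # 1-based positions
    for e in range(1, n + 1):
        vec = 0
        for a in lex.get(words[e - 1], ()):          # parts of speech of word e
            vec |= chainvec[a]                       # add chain-rule parents too
        chart[e][e] |= vec
        for b in range(e - 1, 0, -1):                # grow constituents leftwards
            vec = 0
            for a in range(n_syms):
                if not (vec >> a) & 1 and derivable(chart, rules_by_lhs, b, e, a):
                    vec |= chainvec[a]
            chart[b][e] |= vec
    return chart

def derivable(chart, rules_by_lhs, b, e, a):
    """True if words b..e reduce to symbol a via some binary rule a -> B C."""
    for bsym, csym in rules_by_lhs.get(a, ()):
        for m in range(b, e):                        # end position of first child
            if (chart[b][m] >> bsym) & 1 and (chart[m + 1][e] >> csym) & 1:
                return True
    return False
```

Because the chart rows are plain integers, the chain-rule or-ing (`chart[b][e] |= vec`) really is a single word-parallel operation, mirroring the bit-vector insertion in lines 5 and 9 of Figure 1.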
{
"text": "In order to deal with chain rules, the parser precomputes for each category C the set of non-terminals D which are derivable from C by a sequence of chain rule reductions, i.e. for which D ⇒ C holds, and stores them in the bit vector chainvec[C] . The set includes C itself. Given the grammar rules NP → DT N1, NP → N1, N1 → JJ N1 and N1 → N, the bits for NP, N1 and N are set in chainvec [N] . When a new constituent of category A starting at position b and ending at position e has been recognised, all the constituents reachable from A by means of chain rules are simultaneously added to the chart by or-ing the precomputed bit vector chainvec[A] to chart [b] [e] (see lines 5 and 9 in Fig. 1 ).",
"cite_spans": [
{
"start": 241,
"end": 244,
"text": "[C]",
"ref_id": null
},
{
"start": 380,
"end": 383,
"text": "[N]",
"ref_id": null
},
{
"start": 650,
"end": 653,
"text": "[b]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 680,
"end": 686,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Computation of the Chart",
"sec_num": "3"
},
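The chainvec precomputation is a transitive-closure fixed point over the chain rules. A minimal sketch (my own naming; BitPar's actual implementation may differ), again with Python ints as bit vectors:

```python
def compute_chainvec(chain_rules, n_syms):
    """chainvec[C] has bit D set iff D is derivable from C by a sequence of
    chain-rule reductions (C itself included).
    chain_rules: list of (D, C) pairs for each chain rule D -> C."""
    chainvec = [1 << c for c in range(n_syms)]   # each symbol derives itself
    changed = True
    while changed:                               # iterate to a fixed point
        changed = False
        for d, c in chain_rules:
            # everything that reduces to D also reduces (via D -> C) to... C's set
            merged = chainvec[c] | chainvec[d]
            if merged != chainvec[c]:
                chainvec[c] = merged
                changed = True
    return chainvec
```

With the example from the text (NP → N1, N1 → N), chainvec[N] ends up with the bits for N, N1 and NP set, so recognising an N inserts all three categories into the chart with one or-operation.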
{
"text": "The first parsing step is a pure recogniser which computes the set of constituents to which the input words can be reduced, but not their analyses. Therefore it is not necessary to look for further analyses once the first analysis of a constituent has been found. The function derivable therefore returns as soon as the first analysis is finished (lines 13 and 14), and derivable is not called if the respective constituent was previously derived by chain rules (line 8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of the Chart",
"sec_num": "3"
},
{
"text": "Because only one analysis has to be found and some rules are more likely than others, the algorithm is optimised by trying the different rules for each category in order of decreasing frequency (line 11). The frequency information is collected online during parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of the Chart",
"sec_num": "3"
},
{
"text": "Derivation of constituents by means of chain rules is much cheaper than derivation via binary rules. Therefore the categories in line 7 are ordered such that categories from which many other categories are derivable through chain rules come first.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computation of the Chart",
"sec_num": "3"
},
{
"text": "The function derivable is the most time-consuming part of the recogniser, because it is the only part whose overall runtime grows cubically with the sentence length. The inner loop of the function iterates over the possible end positions of the first child constituent and computes an and-operation for each position. This loop can be replaced by a single and-operation on two bit vectors, where the first bit vector contains the bits stored in chart[b][b][B] through chart[b][e-1][B] and the second bit vector contains the bits stored in chart[b+1][e][C] through chart[e][e][C]. To make the extraction of these two vectors fast, the chart of the bit-vector parser internally consists of two copies with different bit orderings, and new bits must be inserted in both copies. The insertion of bits into the chart by means of the operation chart[b][e] ← chart[b][e] | vec can therefore not be done with bit-vector operations anymore. Instead, each 1-bit of the bit vector has to be set separately in both copies of the chart. Binary search is used to extract the 1-bits from each machine word of a bit vector. This is more efficient than checking all bits sequentially if the number of 1-bits is small. Figure 3 shows how the 1-bits would be extracted from a 4-bit word v and stored in the set s. The first line checks whether any bit is set in v. If so, the second line checks whether one of the first two bits is set. If so, the third line checks whether the first bit is 1 and, if true, adds 0 to s. Then it checks whether the second bit is 1 and so on.",
"cite_spans": [],
"ref_spans": [
{
"start": 779,
"end": 787,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using Bit-Vector Operations",
"sec_num": "4"
},
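The replacement of the inner loop of derivable by one and-operation can be sketched as follows. This is an illustration under stated assumptions, not BitPar's code: Python ints stand in for machine-word bit vectors, rowbits[b][B] has bit m set iff words b..m reduce to B (one chart copy), and colbits[e][C] has bit m set iff words m..e reduce to C (the other copy).

```python
def derivable_fast(rowbits, colbits, rules_by_lhs, b, e, a):
    """derivable() with the loop over split positions m replaced by a
    single AND over position bit vectors (cf. Sec. 4)."""
    # select the admissible split positions m in [b, e-1] (1-based)
    mask = ((1 << e) - 1) & ~((1 << b) - 1)
    for bsym, csym in rules_by_lhs.get(a, ()):
        v1 = rowbits[b][bsym]        # bit m: B spans b..m
        v2 = colbits[e][csym] >> 1   # shift so bit m means: C spans m+1..e
        if v1 & v2 & mask:           # any common split point m?
            return True
    return False
```

The shift by one aligns the two vectors on the split point m, so a non-zero AND result means some m exists with B over b..m and C over m+1..e.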
{
"text": "1 if v ≠ 0000 then 2 if v & 1100 ≠ 0000 then 3 if v & 1000 ≠ 0000 then 4 s.add(0) 5 if v & 0100 ≠ 0000 then 6 s.add(1) 7 if v & 0011 ≠ 0000 then 8 if v & 0010 ≠ 0000 then 9 s.add(2) 10 if v & 0001 ≠ 0000 then 11",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Bit-Vector Operations",
"sec_num": "4"
},
{
"text": "s.add(3) Figure 3 : Extraction of the 1-bits from a bit vector",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using Bit-Vector Operations",
"sec_num": "4"
},
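The binary-search extraction of Figure 3 generalises from a 4-bit word to any power-of-two width by recursively halving the word and pruning halves that contain no 1-bit. A sketch (the power-of-two width is an assumption of this illustration; bit position 0 is the leftmost bit, as in the figure):

```python
def extract_bits(v, width):
    """Return the positions of all 1-bits of the width-bit integer v,
    found by binary search; cheaper than a sequential scan when few
    bits are set. width must be a power of two."""
    s = []
    def rec(v, offset, width):
        if v == 0:
            return                   # no 1-bit in this half: prune
        if width == 1:
            s.append(offset)         # reached a single set bit
            return
        half = width // 2
        rec(v >> half, offset, half)                 # upper (leftmost) half
        rec(v & ((1 << half) - 1), offset + half, half)  # lower half
    rec(v, 0, width)
    return s
```

For a sparse word the recursion visits only O(k log w) nodes for k set bits in a w-bit word, which is why it beats checking all w bits when k is small.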
{
"text": "The chart only provides information about the constituents, but not about their analyses. In order to generate a parse forest representation of the set of all analyses, the chart is traversed top-down, reparsing all the constituents in the chart which are part of a complete analysis of the input sentence. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parse Forest Generation",
"sec_num": "5"
},
{
"text": "catname[catnum[child[first-child[first-analysis[n]]]]] is therefore the name of the category of the first child of the first analysis of the n th constituent. The rule-number array is not needed to represent the structure of the parse forest, but speeds up the retrieval of rule probabilities and similar information. The parse forest is built by the function parse shown in Figure 5 . The function newnode(b,e,A) adds the number of A at the end of the catnum array. It also adds the currently biggest index of the first-child array plus 1 to the first-analysis array. It returns the largest index of the catnum array as node number. newnode also stores a mapping from the triple ⟨b,e,A⟩ to the respective node number n in a hash table. The hash table is used by get-node(b,e,A) to check whether a constituent has already been added to the parse forest and, if true, to return its number. add-analysis(n,r,m) increments the size of the child array by 2 and adds the index of the first new element to the first-child array. It further adds the number of rule r to the rule-number array and stores the pair ⟨r,m⟩ in a temporary array which is later accessed in lines 17, 19, and 22. add-analysis(n,r) is similar, but adds just one element to the child array. Finally, the function add-child inserts the child node indices returned by recursive calls of build-subtree. The optimisation with bit-vector operations described in section 4 is also applicable in lines 14 and 15.",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 330,
"text": "Figure 5",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Parse Forest Generation",
"sec_num": "5"
},
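The array encoding of the parse forest described above can be illustrated with a small reader function. The field names follow the text, but the dictionary layout and the exact terminal encoding are my own illustration (here any negative child value marks a terminal), not BitPar's memory layout.

```python
def analyses(forest, n):
    """Yield (rule number, child list) for each analysis of forest node n,
    decoded from the flat-array parse forest described in the text."""
    first = forest["first_analysis"][n]
    last = forest["first_analysis"][n + 1]          # one past the last analysis
    for a in range(first, last):
        rule = forest["rule_number"][a]
        c0 = forest["first_child"][a]
        c1 = (forest["first_child"][a + 1]
              if a + 1 < len(forest["first_child"])
              else len(forest["child"]))
        # negative entries in child denote terminals (input words)
        yield rule, forest["child"][c0:c1]
```

Because a node's analyses and children occupy contiguous array ranges, the whole forest lives in a handful of flat arrays instead of a pointer structure, which is what makes the representation compact.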
{
"text": "Viterbi parses for probabilistic context-free grammars (PCFGs) could be extracted from context-free parse forests, but BitPar computes them without building the parse forest in order to save space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Viterbi Parsing",
"sec_num": "6"
},
{
"text": "BitPar was developed for the generation of parse forests with large treebank grammars. It saves memory by splitting parsing into two steps, (1) the generation of a recogniser chart which is compactly stored in a bit vector, and (2) the generation of the parse forest. Parse forest nodes are only created for constituents which are part of a complete analysis, whereas standard 1-pass chart parsers create more nodes which are later abandoned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Viterbi parsing involves four steps. About 15 % of the parse time is needed for building the chart, 28 % for filtering, and 57 % for the computation of the Viterbi probabilities. The time required for the extraction of the best parse is negligible (0.04 %). The Viterbi step spends about 80 % of the time (45 % of the total time) on the computation of the probabilities and only about 20 % on the computation of the possible analyses. So, although Viterbi probabilities are only computed for nodes which are part of a valid analysis, it still takes almost half of the time to compute them, and the proportion increases with sentence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "In contrast to most beam search parsing strategies, BitPar is guaranteed to return the most probable analysis, and there is no need to optimise any scoring functions or parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "The parser was tested with a grammar containing 65,855 grammar rules and 4,444 different categories. The grammar was extracted from a version of the Penn treebank which was annotated with additional features similar to (Klein and Manning, 2003b) . The average rule length is 3.7 (without the parent category). The experiments were conducted on a Sun Blade 1000 Model 2750 server with 750 MHz CPUs and 4 GB memory.",
"cite_spans": [
{
"start": 221,
"end": 247,
"text": "(Klein and Manning, 2003b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "In a first experiment, 1000 randomly selected sentences from the PENN treebank containing 24,595 tokens were parsed. Viterbi parsing of these sentences took 27,596 seconds (1.14 seconds per word). The generation of parse forests 3 for the same sentences took 26,840 seconds (1.09 seconds per word). In another experiment, we examined how parse times increase with sentence length. Figure 9 shows the average Viterbi parse times of BitPar for randomly selected sentences of different lengths 4 . For comparison, the average parse times of the LoPar parser (Schmid, 2000) on the same data are also shown. LoPar is a 1-pass left-corner chart parser which computes the Viterbi parse from the parse forest. BitPar is faster for all sentence lengths and the growth of the parse times with sentence length is smaller than for LoPar. Although the asymptotic runtime complexity of BitPar is cubic, figure 9 shows that the exponent of the actual growth function in the range between 4 and 50 is about 2.6. This can be explained by the fact that the bit-vector operations become more effective as the length of the sentence and therefore the length of the bit vectors increases. (Footnote 3: The parse forests were only generated but not printed. Footnote 4: The two bulges of the BitPar curve were probably caused by a high processor load. The experiment will be repeated for the final version of the paper.)",
"cite_spans": [
{
"start": 555,
"end": 569,
"text": "(Schmid, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 1160,
"end": 1161,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 381,
"end": 389,
"text": "Figure 9",
"ref_id": "FIGREF9"
},
{
"start": 889,
"end": 897,
"text": "figure 9",
"ref_id": "FIGREF9"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "The memory requirements of BitPar are far lower than those of LoPar. LoPar needs about 1 GB memory to parse sentences of length 22, whereas BitPar allocates 180 MB during parse forest generation and 55 MB during Viterbi parsing. For the longest sentence in our 1000 sentence test corpus with length 55, BitPar needed 113 MB to generate the Viterbi parse and 3,185 MB to compute the parse forest. LoPar was unable to parse sentences of this length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "We are planning to evaluate the influence of the different optimisations presented in this paper on parsing speed, and to compare BitPar with parsers other than LoPar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "8"
},
{
"text": "A bit-vector based implementation of the CKY algorithm for large highly ambiguous grammars was presented. The parser computes in the first step a recogniser chart and generates the parse forest in a second step top-down by reparsing the entries of the chart. Viterbi parsing consists of four steps comprising (i) the generation of the chart, (ii) top-down filtering of the chart, (iii) computation of the Viterbi probabilities, and (iv) the extraction of the Viterbi parse. The basic parsing operation (building new constituents by combining two constituents according to some binary rule) is parallelised by means of bit-vector operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "9"
},
{
"text": "The presented method is efficient in terms of runtime as well as space requirements. The empirical runtime complexity (measured for sentences with up to 50 words) is better than cubic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "9"
},
{
"text": "The presented parser is particularly useful when the whole set of analyses has to be computed rather than the best parse. The Viterbi version of the parser is guaranteed to return the most probable parse tree and requires no parameter tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "9"
},
{
"text": "The start and end positions of a constituent are the indices of the first and the last word covered by the constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Edge-based best-first chart parsing",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Sixth Workshop on Very Larger Corpora",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charniak, E., Goldwater, S., and Johnson, M. (1998). edge-based best-first chart parsing. In Pro- ceedings of the Sixth Workshop on Very Larger Cor- pora, pages 127-133. Morgan Kaufmann.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An improved context-free recognizer",
"authors": [
{
"first": "S",
"middle": [
"L"
],
"last": "Graham",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Harrison",
"suffix": ""
},
{
"first": "W",
"middle": [
"L"
],
"last": "Ruzzo",
"suffix": ""
}
],
"year": 1980,
"venue": "ACM Transactions on Programming Languages and Systems",
"volume": "2",
"issue": "3",
"pages": "415--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham, S. L., Harrison, M. A., and Ruzzo, W. L. (1980). An improved context-free recognizer. ACM Transactions on Programming Languages and Systems, 2(3):415-462.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An efficient recognition and syntax analysis algorithm for context-free languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kasami",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kasami, T. (1965). An efficient recognition and syntax analysis algorithm for context-free lan- guages. Technical Report AFCRL-65-758, Air Force Cambridge Research Laboratory.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A* parsing: Fast exact viterbi parse selection",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT-NAACL 03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, D. and Manning, C. D. (2003a). A* parsing: Fast exact viterbi parse selection. In Proceedings of HLT-NAACL 03.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, D. and Manning, C. D. (2003b). Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "LoPar: Design and Implementation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 2000,
"venue": "Arbeitspapiere des Sonderforschungsbereiches 340, Number 149. Institute for Computational Linguistics, University of Stuttgart",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schmid, H. (2000). LoPar: Design and Implementation. Number 149 in Arbeitspapiere des Sonderforschungsbereiches 340. Institute for Computational Linguistics, University of Stuttgart.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognition and parsing of context-free languages in time n3",
"authors": [
{
"first": "D",
"middle": [
"H"
],
"last": "Younger",
"suffix": ""
}
],
"year": 1967,
"venue": "Information and Control",
"volume": "10",
"issue": "",
"pages": "189--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Younger, D. H. (1967). Recognition and parsing of context-free languages in time n3. Information and Control, 10:189-208.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "1. Compute the frequencies of the pairs of neighbouring symbols on the right-hand sides of rules. 2. Choose the most frequent pair ⟨A,B⟩, add a new non-terminal X, and replace the symbol pair A B in all rules with X. (Footnote 1: personal communication)",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "CKY-recogniser",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "The chart is actually implemented as a single large bit-vector with access functions translating index triples (start position, end position, and symbol number) to vector positions. The bits in the chart are ordered such that chart[b][e][n+1] follows after chart[b][e][n], allowing the efficient insertion of a set of bits with an or-operation on bit vectors.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "optimised CKY-recogniser",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "Parse forest with two analyses for A",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF6": {
"text": "Parse forest generation parse forests, but BitPar computes them without building the parse forest in order to save space. After building the recogniser chart, the Viterbi version of BitPar filters the chart as shown inFigure 6in order to eliminate constituents which are not part of a complete analysis.After filtering the chart, the Viterbi probabilities of the remaining constituents are computed by the algorithm in figure 7. p[b][e][A] is implemented with a hash table. The value of prob(r) is 1 if the lefthand side of r is an auxiliary symbol inserted during the grammar transformation and otherwise the probability of the corresponding PCFG rule. Finally, the algorithm of figure 8 prints the Viterbi parse.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF7": {
"text": "filter-subtree(P,b,e,A,chart,chart2): if chart2[b][e][A] = 1 then return (chart2[b][e][A] was processed before); set chart2[b][e][A]; for each rule A → B C and each split position m: if chart[b][m][B] = 1 and chart[m+1][e][C] = 1 then filter-subtree(P,b,m,B,chart,chart2) and filter-subtree(P,m+1,e,C,chart,chart2). Viterbi step: if chart[b][m][B] = 1 and chart[m+1][e][C] = 1 then add-prob(b,m,e,A,r). Generation of Viterbi parse",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF9": {
"text": "Average parse times",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>recognise(P,N,w1,...,wn)</td></tr><tr><td>allocate and initialise chart[1..n][1..n][N] to 0; allocate vec[N]</td></tr><tr><td>for e ← 1 to n do</td></tr><tr><td>initialise vec[N] to 0</td></tr><tr><td>for each non-terminal A with A → w_e ∈ P do vec ← vec | chainvec[A]</td></tr><tr><td>chart[e][e] ← chart[e][e] | vec</td></tr><tr><td>for b ← e-1 down to 1 do</td></tr><tr><td>initialise vec[N] to 0</td></tr><tr><td>for each non-terminal A ∈ N do</td></tr><tr><td>if vec[A] = 0 and derivable(P,N,b,e,A) then vec ← vec | chainvec[A]</td></tr><tr><td>chart[b][e] ← chart[b][e] | vec</td></tr><tr><td>derivable(P,N,b,e,A)</td></tr><tr><td>for each rule A → B C ∈ P do</td></tr><tr><td>vec1 ← chart[b][b...e-1][B]</td></tr><tr><td>vec2 ← chart[b+1...e][e][C]</td></tr><tr><td>if vec1 &amp; vec2 ≠ 0 then return true</td></tr><tr><td>return false</td></tr></table>",
"text": "The first bit vector contains the bits stored in chart[b][b][B], chart[b][b+1][B] ... chart[b][e-1][B] and the second bit vector contains the bits stored in chart[b+1][e][C], chart[b+2][e][C] ... chart[e][e][C]. The bit-vector operation is overall more efficient than the solution shown in Figure 1 if the extraction of the two bit vectors from the chart is fast enough. If the bits in the chart are ordered such that chart[b][1][A] ... chart[b][N][A] are in sequence, the first bit vector can be efficiently extracted by block-wise copying. The same holds for the second bit vector if the bits are ordered such that chart[1][e][A] ... chart[n][e][A] are in sequence. Therefore, the chart of the parser which uses bit-vector operations internally consists of two bit vectors. New bits are inserted in both vectors."
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>catname[catnum[child[first-child[first-analysis[n]]]]]</td></tr></table>",
"text": "The parse forest is stored by means of six arrays named catname, catnum, first-analysis, rule-number, first-child, and child. catnum[n] contains the number of the category of the n th constituent. first-analysis[n] is the index of the first analysis of the n th constituent, and first-analysis[n+1]-1 is the index of the last analysis. rule-number[a] returns the rule number of analysis a, and first-child[a] contains the index of its first child node number in the child array. The numbers of the other child nodes are stored at the following positions. child[d] is normally the number of the node which forms child d. However, if the child with number d is the input word, the value of child[d] is a negative number instead. A negative value in the child array therefore indicates a terminal node and allows decoding of the position of the respective word in the sentence."
}
}
}
}